Posts

  • Appending data to a document in Fauna

    I’ve been diving into faunadb recently. I found it via its integration with Netlify and its GraphQL support. But this post isn’t about either of those things!

    One feature I’ve enjoyed with fauna is its support for User-defined Functions (UDFs). I used to create stored procedures in my relational DB days and like being able to consolidate data-related logic in the database to keep that logic centralized and reduce round-trips from the service tier.

    I’ve been building some custom functions to use as GraphQL endpoints to present a consistent, managed view of the data for my service tier / front-end. To support this, I created a quick utility UDF that merges an arbitrary object into the data payload of a document.
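    Such a utility might be sketched in FQL roughly as follows. This is my own illustration, not the code from the post: the function name, parameter names, and the use of Fauna’s `Merge` are assumptions.

    ```fql
    // Hypothetical sketch of a UDF that merges an arbitrary object
    // into a document's data payload. Names are illustrative only.
    CreateFunction({
      name: "mergeIntoData",
      body: Query(
        Lambda(
          ["docRef", "patch"],
          Update(Var("docRef"), {
            data: Merge(Select("data", Get(Var("docRef"))), Var("patch"))
          })
        )
      )
    })
    ```

    `Merge` combines the document’s existing `data` object with the patch, which gives more uniform behavior than relying on `Update`’s default shallow merge alone.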

  • Custom Sign Up Handler for Netlify

    I’ve started getting back into serverless engineering recently and decided to give Netlify a try. I appreciate their focus on JAMStack and the availability of server-side functions that integrate fairly well into my local dev model.

    One of the first tasks I tackled was integrating authentication with my backend. Netlify provides an Identity service which handles registration, confirmation, and login. It also supports custom metadata, which I wanted to use to tie the DB-side ID to the Netlify identity, but I ran into problems. I tried to use Netlify’s built-in identity hooks but wasn’t able to update metadata as they describe, so I created a custom solution.

  • React Hook Migration Strategy

  • Rust + WASM: Seeding the Game of Life

    After getting a basic game of life running in Rust + WASM, I wanted to expand it a bit further to allow seeding the initial state. Initially, I planned to accept an array of flags to seed any value but decided to use strings instead. It proved to be a bit more difficult but also more interesting.
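    The string-based seeding could look something like the sketch below. This is my own rough illustration, assuming `'o'` marks a live cell; the post’s actual encoding and function names may differ.

    ```rust
    // Hypothetical sketch: decode a seed string into cell states.
    // 'o' marks a live cell; any other character is a dead one.
    fn parse_seed(seed: &str) -> Vec<bool> {
        seed.chars().map(|c| c == 'o').collect()
    }

    fn main() {
        let cells = parse_seed("oxxo");
        assert_eq!(cells, vec![true, false, false, true]);
    }
    ```
    
    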

  • Rust + WASM: Getting Started

    I’ve been curious about the UI framework engineering opportunities of a minimal UI engine built in Rust and compiled into WASM. I’m going to document my journey here!

  • What Makes a Good Pull Request?

    I get to spend a lot of time reviewing other engineers’ code. Over time, I’ve found that it has helped me become a better engineer as I learned new techniques to solve problems, explored parts of the code base that I hadn’t worked in before, and improved my ability to communicate effectively with others.

    Through the course of my reviews, I’ve seen some common practices that I feel make for good pull requests. Some may be obvious but others are a bit more subtle.

  • Consolidating my content

    I decided this past week or so to try to resurrect my writing by fleshing out a post on weekly releases. That gave me just enough momentum to also migrate content from my long forgotten Tumblr, Medium, and Code Mentor accounts.

    While each service has its place and provides a community of content, I wanted to own my own content and have more control over its availability. I might still cross-post to those places but intend to use this site sourced from GitHub as the home for new content.

  • Weekly Releases

    We’ve recently gotten back into the habit of weekly releases in preparation for a 3.0.0 release of Enact and it has reminded me how useful they are. They add some much-needed structure and predictability which help us ship a consistently high quality product.

  • Git Alias: Merge Conflicts

    Originally posted on Code Mentor: https://www.codementor.io/ryan286/git-alias-merge-conflicts-dmkcw8r3p

    Like many engineers, I spend a lot of time working with git. It’s an incredibly powerful tool with more options than most people ever need. If you work in the CLI like I do, you’ve probably added a few aliases to help make you more productive. Today, I added a new one to list merge conflicts.
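    One common way to define such an alias, shown here as my own illustration rather than the exact command from the post, uses `--diff-filter=U` to restrict the diff to unmerged paths:

    ```shell
    # Hypothetical example alias: list files with unresolved merge conflicts.
    # --diff-filter=U limits the diff to unmerged (conflicted) paths.
    git config --global alias.conflicts "diff --name-only --diff-filter=U"

    # Usage: inside a repository with an in-progress merge
    git conflicts
    ```
    
    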

  • Working with React Context

    Originally published on Code Mentor: https://www.codementor.io/ryan286/working-with-react-context-9vgn0gdof

    One of the most important concepts of React is unidirectional data flow. Data enters the system at one point and flows downstream with each component filtering and augmenting that data for its children. This is an incredibly simple but also very powerful paradigm that enables you to build complex systems with simple data flows.