Channel: NoRedInk

Picking Dates with Elm


Introduction

A frontend developer sometimes just wants to drop a JavaScript widget somebody else made into their application and move on. Maybe that widget is a slider, or a menu, or a progress bar, or a spinner, or a tab, or a tooltip that points in a cool way. And sometimes that same frontend developer would like to write their application in Elm. Should this developer wait to use Elm until the widget they want is rewritten in Elm? Should they rewrite everything they need?

NoRedInk ran into this problem a few years ago with datepickers. Now there are some Elm datetimepicker options, but at the time we needed to weigh building a datepicker from scratch in Elm against using the JS datepicker library we had been using before. We put building an Elm datepicker on the hackday idea pile and went with the JS datepicker library. Even with the complications of dates and time, using a JS datepicker in an Elm application ended up being a fine experience.

So our frontend developer who wants a JS widget? They can use it.

Readers of this post should have some familiarity with the Elm Architecture and with Elm syntax, but do not need to have made complex apps. This post is a re-exploration of concepts presented in the Elm guide (Introduction to Elm JavaScript interop section) with a more ~timely~ example (that is, we’re going to explore dates, datepickers, and Elm ports).

On Dates and Time

The local date isn’t just a question of which sliver of the globe one is located in: time involves perception, measurability, science, and politics.

As individuals, we prefer to leave datetime calculations to our calendars, to our devices, and to whatever tells our devices when exactly they are. As developers, we place our faith in the browser implementations of the functions and methods describing dates and times.

To calculate the current time, the browser needs to know where the user is. The user’s location can then be used to look up the timezone and any additional time-weirdnesses imposed by the government (please read this as side-eyes at daylight saving time–I stand with Arizona). When you run new Date() in your browser’s JS console, apart from constructing the Date you asked for, you’re actually asking for your time as a function of location.
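A minimal sketch of what the browser exposes here (assuming a modern browser or Node.js with the Intl API):

```javascript
// The environment resolves "now" from the machine's clock and settings.
var now = new Date();

// Minutes behind UTC for the current local time (e.g. 420 during PDT).
var offsetMinutes = now.getTimezoneOffset();

// The IANA timezone name the environment resolved, e.g. "America/Phoenix".
var zone = Intl.DateTimeFormat().resolvedOptions().timeZone;
```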

Supposing we now have a Date object that correctly describes the current time, we have the follow-up problem of formatting dates for different users. Our users might have differing expectations for short-hand formats and will have differing expectations for long-hand forms in their language. There’s definitely room to make mistakes; outside of programming, I have definitely gotten confused over 12-hour versus 24-hour clocks and mm/dd/yyyy versus dd/mm/yyyy.
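A quick illustration of the mm/dd versus dd/mm difference (exact output depends on the locale data your environment ships with):

```javascript
// The same Date renders differently per locale: en-US leads with the
// month, en-GB with the day. Note the constructor counts months from
// zero, so 3 here means April.
var d = new Date(2017, 3, 2);
var us = d.toLocaleDateString("en-US"); // month first, e.g. "4/2/2017"
var gb = d.toLocaleDateString("en-GB"); // day first, e.g. "02/04/2017"
```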

Okay, so computers need a way to represent time, timezones, daylight savings. We use the distance in seconds from the epoch to keep track of time. (If you read about the history of the Unix epoch, that’s not as simple as one might hope or expect either!) Then we need a language for communicating how to format this information for different locales and languages.
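In JavaScript, that distance from the epoch is measured in milliseconds:

```javascript
// Time is stored as distance from the Unix epoch; JavaScript uses
// milliseconds, so divide by 1000 for the classic epoch seconds.
var seconds = Math.floor(Date.now() / 1000);

// The epoch itself: midnight UTC, January 1, 1970.
var epoch = new Date(0);
```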

We can represent dates in simple and universal formats. We can use semantic and consistent (or close-to-semantic and close-to-consistent) formatting strings. We can be careful as we parse user input so that we don’t confuse month 2 with day 2. But it’s still really easy to make mistakes. It’s hard to reason about what is going, did go, or will go wrong; sometimes, when deep in investigating a timezone bug, it’s hard to tell what’s going right!

So suppose we’ve got ourselves a great spec that involves adding a date input to a pre-existing Elm app. Where do we start? What should we know?

It’s worth being aware that the complexity of date/time considerations of the human world haven’t been abstracted away in the programming world, and there are at times some additional complications. For example, the JavaScript Date API counts months from zero and days from one. Also worth noting: Dates in Elm actually are JavaScript Date Objects, and date Objects in JavaScript rely on the underlying JavaScript implementation (probably C++).
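The off-by-one trap in one snippet:

```javascript
// JavaScript's Date API counts months from zero and days from one:
var d = new Date(2017, 0, 15); // month 0, day 15 = January 15, 2017
var month = d.getMonth(); // 0 (January)
var day = d.getDate();    // 15
```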

On Interop

The way that Elm handles interop with JavaScript keeps the world of Elm and the world of JavaScript distinct. All the values from Elm to JS flow through one place, and all the values from JS to Elm flow through one place.

Tradeoffs:

  1. It’s possible to break your app

    Suppose we have an Elm app that is expecting a user-named item to be passed through a port. Our port is expecting a string, but oops! Due to some unanticipated type coercion, we pass 2015 through the port rather than "2015". Now our app is unhappy–we have a runtime error:

    Trying to send an unexpected type of value through port userNamedItem: Expecting a String but instead got: 2015

  2. Your Elm apps have JS code upon which they are reliant

    Often, this isn’t a big deal. We used to interop with JavaScript in order to focus our cursor on a given text input dynamically (Now, we use Dom.focus). It’s a nice UX touch, but our site still works without this behavior. That is, if we decide to load our component on a different page, but fail to bring our jQuery code to the relevant JS files for that page, the user experience degrades, but the basic functionality still works.
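One way to reduce the risk from tradeoff 1 is a small guard on the JS side of the port. A sketch, assuming the userNamedItem port named in the error message above and an embedded Elm app handle:

```javascript
// Coerce the value before sending, so a numeric 2015 goes through the
// port as the string "2015" that Elm expects. `userNamedItem` is the
// hypothetical port from the example above.
function sendUserNamedItem(app, value) {
  app.ports.userNamedItem.send(String(value));
}
```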

Benefits:

  1. We can use JavaScript whenever we want to

If you’ve got an old JS modal, and you’re not ready to rewrite that modal in Elm, you don’t have to. Just send whatever info that modal needs, and then let your Elm app know when the modal closes.

  2. The single most brittle place in your code is easy to find

Elm is safe, JavaScript is not, and translating from one to the other may not work. Even without helpful error messages, it’s relatively easy to find the problem. If the app compiles but fails on page load, it’s probably receiving the wrong information.

  3. We keep Elm’s guarantees.

    We won’t have to worry about runtime exceptions within the bulk of our application. We won’t have to worry about types being inconsistent anywhere except at the border of our app. We get to feel confident about most of our code.

So, how do we put a jQuery datepicker in our Elm application?

For this post, we’ll be using the jQuery UI datepicker, but the concepts should be the same no matter what datepicker you use. Once the jQuery and jQuery UI libraries are loaded on the page and the basic skeleton of an app is available on the page, it’s a small step to having a working datepicker.

Our skeleton:


{- *** API *** -}
port module Component exposing (..)

import Date
import Html exposing (..)
import Html.Attributes exposing (..)


main : Program Never Model Msg
main =
    Html.program
        { init = init
        , view = view
        , update = update
        , subscriptions = always Sub.none
        }


init : ( Model, Cmd Msg )
init =
    ( { date = Nothing }, Cmd.none )



{- *** MODEL *** -}


type alias Model =
    { date : Maybe Date.Date }



{- *** VIEW *** -}


view : Model -> Html.Html Msg
view model =
    div [ class "date-container" ]
        [ label [ for "date-input" ] [ img [ alt "Calendar Icon" ] [] ]
        , input [ name "date-input", id "date-input" ] []
        ]



{- *** UPDATE *** -}


type Msg
    = NoOp


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        NoOp ->
            model ! [ Cmd.none ]

Next up, let’s port out to JS. We want to tell JS-land that we want to open a datepicker, and then we also want to change our model when JS-land tells us to.


port module Component exposing (..)

import Date
import Html exposing (..)
import Html.Attributes exposing (..)
import Html.Events exposing (..) -- we need Events for the first time


main : Program Never Model Msg
main =
    Html.program
        { init = init
        , view = view
        , update = update
        , subscriptions = subscriptions
        }


init : ( Model, Cmd Msg )
init =
    ( { date = Nothing }, Cmd.none )



{- *** MODEL *** -}


type alias Model =
    { date : Maybe Date.Date }



{- *** VIEW *** -}


view : Model -> Html.Html Msg
view model =
    div
        [ class "date-container" ]
        [ label [ for "date-input" ] [ img [ alt "Calendar Icon" ] [] ]
        , input
            [ name "date-input"
            , id "date-input"
            , onFocus OpenDatepicker
              -- Note that the only change to the view is here
            ]
            []
        ]



{- *** UPDATE *** -}


type Msg
    = NoOp
    | OpenDatepicker
    | UpdateDateValue String


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        NoOp ->
            model ! [ Cmd.none ]

        OpenDatepicker ->
            model ! [ openDatepicker () ]

        UpdateDateValue dateString ->
            { model | date = Date.fromString dateString |> Result.toMaybe } ! []



{- *** INTEROP *** -}


port openDatepicker : () -> Cmd msg


port changeDateValue : (String -> msg) -> Sub msg


subscriptions : Model -> Sub Msg
subscriptions model =
    changeDateValue UpdateDateValue

Note that here, we’re also carefully handling the string that we’re given from JavaScript. If we can’t parse the string into a Date, then we just don’t change the date value.
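If you’d like a belt-and-suspenders check on the JS side as well, a sketch:

```javascript
// Date.parse returns NaN for strings it cannot parse, so we can screen
// values before sending them through the port at all.
function isParseableDate(dateString) {
  return !isNaN(Date.parse(dateString));
}
```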

Finally, let’s actually add our Elm app and datepicker to the page.


$(function() {
  var elmHost = document.getElementById("elm-host");
  var app = Elm.Component.embed(elmHost);

  $.datepicker.setDefaults({
    showOn: "focus",
    onSelect: sendDate,
  });

  app.ports.openDatepicker.subscribe(function() {
    $("#date-input").datepicker().datepicker("show");
  });

  function sendDate (dateString) {
    app.ports.changeDateValue.send(dateString);
  }
});

Checking this out in the browser (with a few additional CSS styles thrown in):

All we have to do is embed our app, open the datepicker when told to do so, and send values to Elm when appropriate! This is the same strategy to follow when working with any JS library.

Fancy Stuff

Storing the final word on the value outside of the UI component (i.e., the datepicker itself) makes it easier to handle complexity. At NoRedInk, engineers have built quite complicated UIs involving datepickers:

NoRedInkers changed the displayed text from a date-like string to “Right away”–and made /right away/i an allowed input

We can check to see if the selected date is the same as now, plus or minus some buffer, and send a string containing that information to Elm. This requires a fair amount of parsing and complicates how dates are stored in the model.

A simplified version of a similar concept follows–we add some enthusiasm to how we’re displaying selected dates by adding exclamation marks to the displayed date.

Note that this introduces a new dependency for date formatting (rluiten/elm-date-extra).


...

import Date
import Date.Extra.Config.Config_en_us
import Date.Extra.Format

...

viewCalendarInput : Int -> Maybe Date.Date -> Html Msg
viewCalendarInput id date =
    let
        inputId =
            "date-input-" ++ toString id

        dateValue =
            date
                |> Maybe.map (Date.Extra.Format.format Date.Extra.Config.Config_en_us.config "%m/%d/%Y!!!")
                |> Maybe.withDefault ""
    in
        div [ class "date-container" ]
            [ label [ for inputId ] [ viewCalendarIcon ]
            , input
                [ name inputId
                , Html.Attributes.id inputId
                , value dateValue
                , onFocus (OpenDatepicker inputId)
                ]
                []
            ]

...

We can make the value of the input box whatever we want! Including a formatted date string with exclamation marks on the end. Note, though, that if we make whatever is in our input box un-parseable for the datepicker we’re using, we’ll have to give it more info if we want it to highlight the date we’ve selected when we reopen it. Most datepickers have a defaultDate option, and we can take advantage of that to handle this case.
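A sketch of that cleanup, assuming the "!!!" suffix from the format string above (the function name is made up):

```javascript
// Strip the decoration so the datepicker can parse the value again and
// highlight the selected date when it reopens.
function toDefaultDate(displayValue) {
  return displayValue.replace(/!+$/, "");
}

// e.g. $("#date-input-1").datepicker({ defaultDate: toDefaultDate(value) });
```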

Note that we’ve also generalized our viewCalendarInput function. There are some other changes that we need to make to support having multiple date input fields per page–like having more than one date field on the model, and sending some way of determining which date field to update back from JS.

For brevity’s sake, we’ll exclude the code for supporting multiple date inputs per page, but here’s an image of the working functionality:

NoRedInkers created an autofill feature

Leveraging the type system, we can distinguish between user-set and automagically-set dates, and set a series of date steps to be any distance apart from each other by default. The fun here is in determining when to autofill–we shouldn’t autofill, for instance, after a user has cleared all but one autofilled field, but we should autofill if a user manually fills exactly one field.

We actually decided that while this was slick, it would create a negative user experience; we scrapped the whole autofill idea before any users saw it. While there was business logic that we needed to rip out in order to yank the feature, we didn’t need to change any JavaScript code whatsoever. Writing the autofill functionality was fun, and then pulling out the functionality went really smoothly.

NoRedInkers supported user-set timezone preferences

I recommend rluiten/elm-date-extra, which supports manually passing in a timezone offset value and using the user’s browser-determined timezone offset. Thank you to Date-related open source project maintainers and contributors!

Concluding

Someday the Elm community will have a glorious datepicker that developers use by default. For now, there are JavaScript datepickers available for use (and some up-and-coming Elm datepicker projects as well!), and for developers not ready to switch away from jQuery components, interop with JavaScript can smoothly integrate even very effect-heavy libraries.

There are components that don’t exist in Elm yet, but that shouldn’t stop us from using them in our Elm applications and it shouldn’t stop us from transitioning to Elm. Projects that need those components can still be written in beautiful, easy-to-follow, easy-to-love Elm code. Sure, it would be nice if it were all in Elm–for now, we can use our JavaScript component AND Elm!


Tessa Kelly
@t_kelly9
Engineer at NoRedInk


Learning Elm from scratch


Hello from a brand new Junior Engineer at NoRedInk! I started working at NoRedInk in January and it’s both my first job as a Software Engineer and my first time using Elm. Thinking about learning Elm? Here’s what it was like to learn Elm from scratch!

The Beginning

A brief background on my programming experience: in 2016, I attended a coding bootcamp that taught Ruby on Rails and JavaScript and stayed on for a year as a TA. I’d done some programming in Matlab during college, but little else before attending the bootcamp. After receiving an offer from NoRedInk, I spent three computer-free months road tripping across the United States. This all means that I had a) very little experience working with purely functional and statically-typed languages, b) programming skills coated in a three-month layer of road trip dust, and c) equally matched levels of excitement and terror over learning Elm well enough to contribute to NRI’s codebase.

My initial terror towards starting a new job and learning a new language quickly abated. I spent 100% of my first week pairing with more experienced engineers on the team, which was both educational and fun (the NRI team is an amusing bunch). Nonetheless, my confusion abounded, stemming mainly from the size of the codebase (Help! where does everything live?) and my unfamiliarity with Elm. While some parts of Elm made intuitive sense, other aspects of the language felt perplexing and mysterious.

Battling Confusion

Elm was the first language to introduce me to type signatures. I was told that their purpose was to provide helpful compiler errors in lieu of unhelpful runtime errors, but, to an inexperienced user, they mostly provided confusion. During my first day writing code at NoRedInk, I encountered the Html.map function while looking at a reusable view and was rather perplexed. Its type signature looks like this:


map : (a -> msg) -> Html a -> Html msg

I wasn’t quite sure what Html.map’s type signature meant by (a -> msg), nor did I understand what a or msg were supposed to be. Beyond making sense of its type signature, I had little understanding of why we needed to use Html.map in the first place.

While I wish I could report that I went home that day with a firm grasp on Html.map (and all things Elm-related really), the reality was that it took a while longer for the pieces to come together. Html.map lies at the intersection of several concepts that were new to me and I was missing too many pieces of context to understand how it worked. Before I could understand Html.map, I needed to have a grasp on type signatures, Elm architecture, and the general idea of passing around functions as arguments. However, as a brand-new Elm user, I was not yet aware that I was missing these pieces of context and felt frustrated when I didn’t immediately understand what was going on.

Luckily, I had a team of experienced Elm developers at my disposal who could point me towards a number of useful resources and learning strategies. Here are some resources and strategies that I found particularly helpful:

The Elm Tutorial: The Elm Tutorial is a great resource for beginners. Walking through the tutorial from beginning to end gave me a good high level overview of Elm architecture and provided me with pieces of context I didn’t even know I was missing. After finishing the tutorial, I felt significantly less confused about how Elm deals with user interactions and understood why a view function returns an Html msg. Working through the tutorial was also a good way to handle Elm code in a simplified context, rather than trying to understand what was going on in NoRedInk’s complex codebase.

The Elm Package Docs: Whenever I’m confused about a function or am wondering whether a function that I need exists, I consult the Elm Package docs. While they are a stellar resource, the Elm Package docs can also feel overwhelming. They contain a lot of information, and it can feel difficult to know where to start. For beginners interested in creating a basic web app, referencing the Core and HTML packages provides a good starting point. I also find it helpful to think about what type signature will solve the problem at hand and search for that type signature when looking for functions. For example, if I’m looking for a function that takes in an Int and returns a String, I can grep for Int -> String in a specific package. Like learning a new language, learning to navigate documentation takes time, and as I’ve worked more with Elm, I’ve become more confident in using the docs to look up what I need.

Drawing connections: Another strategy that has helped me is drawing connections between unfamiliar concepts and concepts that I understand well. In the case of Html.map, it was helpful to look at List.map. Just like JavaScript’s map function, Elm’s List.map requires a function and a list. It uses the function to transform each element in the list into a new element:


map : (a -> b) -> List a -> List b

A member of my team pointed out that Html.map works similarly. Instead of transforming the elements of a list, it transforms a msg. Drawing a connection to a concept that I did understand well helped Html.map click.
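In JavaScript terms, List.map behaves like the familiar Array.prototype.map:

```javascript
// A function from a -> b, applied to each element, turns a List a into
// a List b. Here, (a -> b) is n -> n * 2.
var doubled = [1, 2, 3].map(function (n) {
  return n * 2;
});
// doubled is [2, 4, 6]
```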

The Elm Community: As an employee at NoRedInk, I have the unique advantage of working with experienced members of the Elm community on a daily basis. If I have a question, I need look no further than the desk next to me to seek help. Being unafraid to ask “stupid” questions has also been extremely valuable to my learning process. Sometimes, there is no better way to resolve confusion than admitting that you don’t know something and asking a human being who does!

Even if you lack the convenience of having experienced Elm developers one desk over, there are still several ways to connect with the Elm community, including the Elm Subreddit, Elm’s Slack, and Elm meetup groups. In my experience, the Elm community is very friendly and wants to help you learn, whatever your background and current level of knowledge. Meander into the Elm grove and say hello!

Making Peace with Confusion

My ultimate piece of advice is to dive right in and try to build something if you’re thinking of learning Elm. There are plenty of resources out there to help you, and there is no better way to start learning a language than to… start learning it!

I’ve grown to love parts of the language that I initially found confusing and frustrating. Type signatures and compiler errors are my new best friends (along with my great new coworkers, of course). Type signatures force me to think a few moves ahead and be more conscious of the code that I’m writing. Compiler errors tell me exactly what I’m doing wrong without forcing me to embark on the grand debugger chase-down of 2017. I’d almost call debugging my code… fun?

Sometimes, the amount of context needed to understand a seemingly simple snippet of code can feel overwhelming. It’s okay to need more context! Learning a new language isn’t about understanding everything immediately; it’s about building foundations and circling back to complex topics later. It can be frustrating not to understand concepts at first glance, but as I’ve picked up more languages, I’ve accepted confusion as a natural part of the process. I am still new to Elm and have a lot to learn, but my fear of learning the language has greatly diminished. It’s all fun from here on out! I’m excited to be working at NoRedInk and look forward to sharing more about the joy and confusion that accompany learning a new language. Interested in joining us? We’re hiring!


Brooke Angel
Engineer at NoRedInk

Swapping Engines Mid-Flight


A few months ago, I had the privilege of joining the product team for our First Design Sprint. Starting with a huge user pain-point, we used the Design Sprint process to arrive at a solution with a validated design (yay, classroom visits!), and a working prototype. If you’re curious about that process, I highly recommend you give that post a read. Long story short (and simplified): students practice on NoRedInk to gain mastery; the old way of calculating mastery frustrated students… a lot; the new way of calculating mastery feels much more fair.

This post is about what came after the design sprint:

We replaced the core functionality of our site with a completely new experience, without downtime, without a huge feature branch full of merge conflicts, and while backfilling 250 million records.

Actually, this post is only the first of two, in which I hope to discuss the strategies we did and didn’t use to build and deploy this feature. A future post will be a deep-dive into backfilling the 250 million rows without requiring weeks of wall time. I make no claim we did anything original or even unexpected. But, I hope reading this particular journey, and my missteps along the way, will bring together some pieces that help you in your own work.

The Omnibus Strategy

I’ve been working at NoRedInk for 4 years – back since the engineering team consisted of just a handful of us – and things have changed a lot. In the early days, when we had a big new feature we would:

  1. Start an omnibus feature branch
  2. Create feature branches off of the omnibus branch
  3. Review each feature branch, and merge it into the omnibus branch
  4. Resolve all the merge conflicts in the omnibus branch that crop up as other engineers merged code into master
  5. Then, deal with merge conflicts between the omnibus branch and any/all feature branches
  6. Keep creating, reviewing, and merging feature branches until the omnibus branch is fully featured
  7. QA the completed omnibus branch
  8. Merge the omnibus branch into master and deploy

As we added more team members, and our features got more complex, the merge conflicts became a nightmare. I had heard this could be avoided by using feature flags, but (though I’d never actually tried it) I’d decided that the resulting code complexity wasn’t worth it. Maybe I was right back when we had 3 engineers, but by the time we were 6+, quite frankly, I was dead wrong.

The Flipper Strategy

Around year 3, we started using feature flags for large features (in particular, we use the Flipper gem) thanks to some polite prodding by the trio of Charles, Marica, and Rao. For the uninitiated, this produces code similar to the following all over your codebase:


if FeatureFlag[:new_thing].enabled?
  do_fancy_new_thing()
else
  do_old_thing()
end

As long as that feature flag is turned off, your new code has no effect. The magical win you get when you write code that doesn’t affect users is you can merge every little PR about your new feature directly into master! No extended merge conflicts. No branches off of branches. And if you’re using feature flags, you can have tests for both the new and old functionality co-exist. Plus when you’re ready, you can turn the new feature on (and back off) without a deploy.

The new approach looks like this:

  1. Start a feature branch off of master
  2. Code up a small piece of your new feature, and put that functionality behind a feature flag. Make sure the old functionality still works
  3. Review that branch as if it were any other PR, except now we need to make sure both the new functionality works and the old functionality is unchanged
  4. Merge your PR into master

It’s almost exactly the same as development-as-usual.

Side note: you don’t need feature flags to merge not-yet-released code. As long as the new functionality is disabled (e.g. if false) or a no-op (e.g. writing data to an as-of-yet unused table) you’re in good shape. What feature flags give you is an easy way to toggle functionality in tests, during QA, and on production – so your “disabled” functionality can also be easily verified and tested.

Running Two Different Engines at Once

The first talk I heard about migrating between systems with a lot of usage was a talk in 2010 by Harry Heymann at Foursquare. They were moving from PostgreSQL to MongoDB while users were “checking in” ~1.6M times / day. They followed a pretty clean approach:

  1. Build the new system to run in parallel with the old system. Write to both systems, but keep reading from only the old system.
  2. Validate that new system is running as expected. At this point, we’re confident all data moving forward is good.
  3. Backfill the new system.
  4. Swap! Start reading from the new system - and you’re live!
  5. Retire the old system.

“Swap!” in our case, meant turning on the feature flag.

This seemed like the right approach. Even our usage numbers are similar – our usage today is about 5x theirs in 2010.

The key difference for us is that Foursquare had two systems that were expected to work identically; we have two systems designed to work completely differently. One example: if a student answers a question incorrectly on the site,

  - the old system would take away 50% of her mastery points,
  - the new system doesn’t take away any points, but requires her to get three questions correct in a row before she can get points in the future.

So, here’s the problem. Let’s imagine Susan is doing her homework while we’re writing to both systems. At this point, the “Old System” is still what users are seeing. The following are real mastery score calculations from both systems:


| Susan        | Old System Score | New System Score |
------------------------------------------------------
| initial      |         0        |         0        |
| correct      |        20        |        20        |
| incorrect    |        10        |        20        | Scores don't match anymore !!!
| correct      |        10        |        20        |
| correct      |        30        |        20        |
| correct      |        50        |        20        |
| correct      |        70        |        40        |
| correct      |        90        |        60        |
| correct      |       100  done! |        80        |

Great! Susan is done with her homework, and she has a grade of 100. Then tomorrow, we swap to the new system. Suddenly, her grade drops to an 80! I’ll let you imagine how furious students and teachers would be if we let that happen.
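The new system’s rule is easy to sketch. The point values here (+20 per scoring correct answer, capped at 100) are assumptions for illustration, but they reproduce Susan’s column above:

```javascript
// Incorrect answers cost nothing, but three correct answers in a row
// are required before points resume.
function newSystemScore(answers) {
  var score = 0;
  var owed = 0; // correct answers still owed before points resume
  answers.forEach(function (correct) {
    if (!correct) {
      owed = 3;
    } else if (owed > 0) {
      owed -= 1;
    } else {
      score = Math.min(100, score + 20);
    }
  });
  return score;
}

// Susan's session: a correct, an incorrect, then six corrects ends at 80.
```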

We’re using feature flags to deploy new code right away, we’re writing to both systems just like Foursquare… I just need everything to match when we flip the feature flag.

I came up with a plan. I’d run the backfill script on the historical data and all the recent data. That way, we overwrite all “New System” data so that it would perfectly match “Old System” scores. Susan’s “New System Score” gets overwritten to be 100, and crisis averted. We’d just have to bring the site down for a couple hours on the weekend so there wouldn’t be any additional writes while the script is running.

Here’s Susan again:


| Susan        | Old System Score | New System Score |
------------------------------------------------------
| initial      |         0        |         0        |
| correct      |        20        |        20        |
| incorrect    |        10        |        20        | Scores don't match anymore !!!
| correct      |        10        |        20        |
| correct      |        30        |        20        |
| correct      |        50        |        20        |
| correct      |        70        |        40        |
| correct      |        90        |        60        |
| correct      |       100  done! |        80        |

        TAKE THE SITE DOWN FOR MAINTENANCE

| RUN SCRIPT   |       100        |       100        | Scores match again !!!

             BRING THE SITE BACK UP

There are two problems with this. One, my estimate of “a couple hours of downtime” turned out to be wildly optimistic (I’ll talk more about how wildly in a future post). More importantly, I was solving the wrong problem: there was no reason to let the scores get out of sync to begin with…

Running Two Different Engines in Sync

Foursquare had the right idea; I’d just been applying it wrong. We needed to sync up the two datastores first, and only afterwards start using the new calculation. The key was to write to both datastores with identical values until turning on the feature flag. So, here’s the plan we actually used (the changes are in steps 1, 2, and 4):

  1. Build the new datastore to run in parallel with the old system. Write the values from the old system to both datastores, but keep reading from only the old datastore.
  2. Validate that new system is recording the same values. At this point, we’re confident all data moving forward is good.
  3. Backfill the new system.
  4. Swap! Turn on the feature flag: start reading from the new system, and use the new calculation.
  5. Retire the old system.
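Steps 1 and 2 amount to a dual write with a single read path. A sketch, with hypothetical store names and a plain-object stand-in for the datastores:

```javascript
// Write the *same* value to both datastores while the flag is off;
// reads keep coming from the old store until the flag flips.
function recordScore(oldStore, newStore, studentId, score) {
  oldStore[studentId] = score;
  newStore[studentId] = score; // identical values until the swap
}

function readScore(oldStore, newStore, flagOn, studentId) {
  return flagOn ? newStore[studentId] : oldStore[studentId];
}
```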

Now Susan’s scores will be identical in both systems, and there’s no need to bring the site down before swapping to the new system.

In Conclusion

So what have I learned? First, be careful what lessons you take from others’ experience. And, if you think you need to take the site down to make a change, think again very carefully.

If you notice anything I missed or got wrong, I’d love to hear about it and keep learning - please write to me and let me know. Thanks!


Josh Leven
@thejosh
Engineer at NoRedInk

A Day in the Life of a Curriculum Specialist


Stephanie has been a Curriculum Specialist at NoRedInk since June 2016. Before joining the company, she created literacy curriculum and assessments for a charter network in New York City. Previously, she taught middle school English in Madrid. At NoRedInk, she feels lucky to spend all day thinking deeply about how to leverage technology to support students’ development as writers.

8:45 a.m. - I arrive at the office! It’s pretty quiet at this time—a few of us like to get in early, while others may opt to work from home or to commute in mid-morning. I grab some cereal from our snack room and spend some time skimming the EdSurge newsletter. I always enjoy reading about the challenges and successes that other edtech products experience—there are often lessons we can learn vicariously!

9:15 a.m. - I sit right behind two of our designers, so I often get sneak peeks of new features they’re working on. For the past few months, our designer Becca has been gathering input from teachers and exploring some potential changes to the site’s assignment creation form. Today, she shows me and another one of our colleagues a recent mock-up of the new form. We discuss how our curriculum can be presented most helpfully so that teachers can easily determine what exercises to prioritize and locate topics that align with their state standards.



10:00 a.m. - My colleague Nellie and I meet in a room named “The Arena.” (All of our rooms are named after settings from the top student interests on the site—in this case, The Hunger Games.) We’re in the midst of designing a new “taxonomy,” our name for the scope and sequence of exercises that aims to help students master a larger skill. In this case, we’re focusing on transition words and phrases. Previously, our team researched the topic and established high-level objectives for the pathway. We also drafted sample exercises that we thought could help students achieve these objectives. Now, we’re going to take a close look at our draft and consider which topics we might want to add, cut, or alter.

On the whiteboard in The Arena, Nellie and I sketch exercises and discuss the interfaces that we think would best teach the concept. We note any new technical or design needs to share with our Product and Engineering teams.



12:00 p.m. - Every day, our Curriculum team holds “standup,” a quick, 20-minute meeting where we address issues, ask questions, and make announcements that are relevant to the whole team. One of our team members is based out of Boston, so we log into a Google Hangout so that he can join us on the monitor. Today, one topic of discussion is our upcoming classroom observation. We’ll be testing a couple of new exercises and lessons in a local school to see how helpful they are to students. Observations provide us with crucial data in our curriculum development process. For now, we check in to ensure that we’re all clear on our plan!

12:25 p.m. - It’s Thursday, so it’s a food truck day! Every Tuesday and Thursday, a different selection of food trucks park themselves right in front of our office. We pop downstairs to see what the offerings are.

1:00 p.m. - It’s time for our Support team meeting! I love answering customer support tickets because the process helps me to put myself in teachers’ and students’ shoes. Our Support team is made up of members of the Curriculum and Customer Success teams; most of us are former teachers ourselves. Every Thursday, we gather to discuss any important updates or bugs that have cropped up during the week. This week, we’re also spending some time recording teacher feedback in our “Feature Requests Log.” Whenever a teacher or student makes a suggestion, we record it so that we can identify trends and provide helpful context to our Product team as they consider improvements to the site. Today, we’re logging teacher feedback that we collected during live professional development sessions. We’re happy when we notice that we already have projects in the works to address many of teachers’ concerns, but we also spot some great new suggestions.



2:00 p.m.- We’ve enlisted the help of our user researcher, Christa, to dig into the data on a learning pathway that we released a couple months ago: Topic Sentences. We’re eager to determine which topics in the pathway students have found easiest and most challenging, and whether these results align with our expectations. We’ll use the data to identify any outliers and make adjustments accordingly.

2:30 p.m. - I love pairing with team members on projects, but to build curriculum, independent work time is also essential. Today, I’m working on “approvals” for our Claims, Evidence, and Reasoning learning pathway. This means that I’ll review all the questions our team has written before they go live on the site. I’ll consider: Are there any typos? Is the writing high quality? Does each question teach the objective we set for this topic? Are the questions fair? Is the subject matter engaging? I grab The Chicago Manual of Style to look up a rule about using hyphens, and I leaf through The Book Thief to double-check a quote.



4:00 p.m. - Next, our team begins a final “Content QA.” We each log in and explore the pathway from a student’s perspective, answering questions correctly and incorrectly. We evaluate whether the flow of topics makes sense and whether the lessons are helpful. Seeing the questions live on the site is also a great way to spot any bigger-picture gaps we may have overlooked earlier in the process when we were focusing intensely on the details.

5:30 p.m. - I grab my jacket and join the group of NoRedInkers gathering by the door. Every five weeks, we hold a book club. These meetings usually include pizza, laughter, and thoughtful discussion. It’s always a pleasure to hear others’ perspectives and spend non-work time together. I can’t wait to discuss this month’s pick!


The Curriculum Team



Stephanie Wye
Curriculum Specialist at NoRedInk
We’re hiring! Check out our job postings at https://www.noredink.com/careers

Ruby Threading: Some Practical Lessons


I was recently working on backfilling about 250,000,000 rows of data. As you may have read in Jocey’s post about our First Design Sprint, we were in the process of swapping out our old mastery experience for a brand new one. The rake task I initially wrote to do the backfill was far too slow, and I needed to find ways to speed it up. Last month I wrote a post about swapping out mastery engines and next month I’ll be posting one about backfilling the data – but in the process I ran into a few surprises specifically around threading in Ruby. This post is my attempt to keep you from bumping into those same mistakes.

First of all, if you are looking to parallelize a rake task or background job, managing threads by hand is probably not the best solution. This approach is often more complex than the alternatives, as it requires you to manage the life cycle of each thread, their coordination, and their access to shared state. In most cases, I’d recommend using a background job tool like Resque (forking), or Sidekiq (threaded). But, if you’re dead-set on managing your own thread pool in Ruby, there are a few things you should know.

When Ruby Threads Are Helpful

Threads do have a few advantages:

  • Threading can save you a lot of memory over processes. Multiple threads will share a single instance of a Rails application. Multiple processes each need their own copy. If you want to run 10 threads, and your application uses 50MB of memory: threads will save you upwards of 450MB of memory. Not too shabby.
  • Threads give you a lot of control over their execution. While multiple processes are scheduled primarily by your OS, threads can be orchestrated by you and your program.

If you are using MRI Ruby, there is one extra thing to consider: the infamous Global Interpreter Lock (GIL). Many things have been written on the GIL, and I encourage you to dive in deeper. But for now, in a nutshell, the GIL means that:

No matter how many threads you have, and no matter how many cores your computer has – at any given moment only one thread on one core will ever be running within your Ruby process.

So, if you have a complicated calculation to perform, dividing that calculation up amongst many threads wins you… nothing. A single CPU core will be responsible for all the work, only one thread will be running at any given time, pretty much the same as if you had written your calculation to be single threaded. However, if you have a long-running task which is frequently waiting on an external service (e.g. MySQL queries), it’s a bit of a different story.

Let’s say you have thousands upon thousands of SQL queries to run. In Ruby, when one thread is waiting on a response from the database, that thread will yield control to another thread. That second thread can then assemble and perform a different query, and start waiting on its response. Maybe, at this point, the first thread has received a response from the database and continues on its merry way.

In my case, the queries I needed to run were expensive and the application was spending ~50% of the time waiting on the database. This is a great candidate for speeding up using threading in Ruby. With multiple threads, we can have one thread doing work while another is waiting for the database.
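Here’s a minimal sketch of that effect, using `sleep` as a stand-in for a database call (like a real socket wait, `sleep` causes MRI to release the GIL so other threads can run); the four-query workload and timings are hypothetical:

```ruby
require "benchmark"

# A stand-in for an IO-bound query: while a thread sleeps, MRI releases
# the GIL, so other threads are free to run in the meantime.
def fake_query
  sleep 0.2
end

# Run four "queries" back to back on a single thread.
serial = Benchmark.realtime { 4.times { fake_query } }

# Run the same four "queries" on four threads; their waits overlap.
threaded = Benchmark.realtime do
  4.times.map { Thread.new { fake_query } }.each(&:join)
end

puts format("serial: %.2fs, threaded: %.2fs", serial, threaded)
```

Even with the GIL, the threaded version finishes in roughly the time of a single query, because all four waits happen concurrently.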

Side notes:

  1. The GIL is a lock. The currently running thread holds that lock and no other thread can run until it releases the lock.
  2. When an MRI Ruby thread wants to do any IO, it actually calls out to the kernel to perform that IO. In kernel-space the GIL doesn’t apply! Ruby releases the GIL as soon as the request has been sent to the kernel, at which point another thread can run.
  3. JRuby and Rubinius do not have a GIL, so Ruby threads are more broadly useful on those platforms. E.g. unlike on MRI, threading there can be used to exploit multiple cores.

Adding Threads

To add threading we need to:

  1. Have a way to distribute our problem between the threads
  2. Create and run each of the threads

There are a few different ways to divide up a problem between threads. If you’re familiar with background jobs, then you’ve seen the use of a queue where each worker pulls its next job off of that queue.

Side note, if multiple threads are accessing the same queue, you need a queue which is thread-safe so that those threads don’t step on each other’s toes. Thread safety is a great topic, but I’ll leave it to other blog posts like this one.
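Ruby’s standard library does ship one such thread-safe structure, `Thread::Queue`. A minimal sketch of the worker-queue pattern (the doubling is just placeholder work):

```ruby
# Queue (Thread::Queue) is thread-safe out of the box: multiple threads
# can push and pop without stepping on each other's toes.
jobs = Queue.new
100.times { |i| jobs << i }

results = Queue.new
workers = 4.times.map do
  Thread.new do
    loop do
      begin
        id = jobs.pop(true) # non-blocking pop; raises ThreadError when empty
      rescue ThreadError
        break               # queue drained, this worker is done
      end
      results << id * 2     # placeholder for real per-id work
    end
  end
end
workers.each(&:join)

puts "processed #{results.size} jobs"
```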

In my case we were iterating through a long list of user ids, so I can avoid worrying about thread safety by dividing up those user ids amongst each of the threads in advance. Each thread manages its own list of ids and nothing is shared between threads.
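Here’s a hypothetical sketch of that up-front division (the numbers are made up; the arithmetic mirrors the `thread_each` helper later in this post, with a clamp added by me so the last zone doesn’t overshoot the end of the id range):

```ruby
ids = (1..10)
n_threads = 3

# Round up so every id lands in some zone.
ids_per_thread = (ids.size / n_threads.to_f).ceil

zones = (0...n_threads).map do |i|
  first = ids.first + i * ids_per_thread
  last  = [first + ids_per_thread, ids.last + 1].min # clamp the final zone
  (first...last)
end

p zones # each thread gets its own exclusive range of ids, nothing shared
```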

Creating a Thread in Ruby is surprisingly easy:


Thread.new {
  print "I'm running in a thread. Woohoo!!"
}

However, as we’ll see, there are quite a few gotchas to be aware of. The first one you see in any Ruby threading tutorial: your program will happily exit even if your threads haven’t finished. If you want the program to wait for all threads to finish, it’s up to you to say so. For example, this code:


5.times do |i|
  Thread.new {
    sleep(1)
    print "I'm running in thread #{i}. Woohoo!!"
  }
end
print "All done, exiting"

will produce the following output:


All done, exiting

The main thread (your program) creates each thread. All the threads start, and they will each start sleeping. But before they get to their print statements the main thread prints “All done, exiting” and exits. And when the main thread exits, all threads it created are killed as well.

The key is to join each thread before moving on. The Thread#join function forces the current thread to wait until that thread passes control back (either by exiting, or explicitly calling a function like Thread.stop).

Collect all the threads, call join on each, and we get the output we want:

threads = []

5.times do |i|
  threads.push Thread.new {
    sleep(1)
    print "I'm awake in thread #{i}. Woohoo!!"
  }
end

threads.each { |thread| thread.join }

print "All done, exiting"

will produce the following output:


I'm awake in thread 0. Woohoo!!
I'm awake in thread 1. Woohoo!!
I'm awake in thread 4. Woohoo!!
I'm awake in thread 2. Woohoo!!
I'm awake in thread 3. Woohoo!!
All done, exiting

The order is non-deterministic, but the “sleeping” threads are guaranteed to finish before the main thread.

So lets take a look at that rake task I want to speed up. Here’s the script before threading:

task :sync_mastery_scores, [:start_id, :max_id] => :environment do |_, args|
  ids = ( args[:start_id] .. args[:max_id] )

  ids.step(BATCH_SIZE) do |first_id|
    last_id = first_id + BATCH_SIZE
    Mastery.convert_old_to_new!( first_id, last_id )
  end
end

Here’s the script all ready for threading:

task :sync_mastery_scores, [:start_id, :max_id] => :environment do |_, args|
  ids = ( args[:start_id] .. args[:max_id] )

  thread_each(n_threads: N_THREADS, ids: ids, batch_size: BATCH_SIZE) do |first_id, last_id|
    Mastery.convert_old_to_new!( first_id, last_id )
  end
end

Notice the script has barely changed. The lines:


ids.step(BATCH_SIZE) do |first_id|
  last_id = first_id + BATCH_SIZE
  ...
end

have been replaced with:


thread_each(n_threads: 2, ids: ids, batch_size: BATCH_SIZE) do |first_id, last_id|
  ...
end

This new thread_each function needs to divide up ids into separate zones of ids, one for each thread.

ids_per_thread = (ids.size / n_threads.to_f).ceil

(0...n_threads).each do |thread_idx|
  thread_first_id = ids.first + (thread_idx * ids_per_thread)
  thread_last_id = thread_first_id + ids_per_thread

  thread_ids = (thread_first_id...thread_last_id)

  # ...
  # Start a Thread and iterate through `thread_ids`
  # ...
end

We can fill in that last section by creating a Thread, and having the thread loop through its thread_ids in batches, passing each batch into the block. Just don’t forget to join all the threads at the end!

ids_per_thread = (ids.size / n_threads.to_f).ceil

(0...n_threads).each do |thread_idx|
  thread_first_id = ids.first + (thread_idx * ids_per_thread)
  thread_last_id = thread_first_id + ids_per_thread

  thread_ids = (thread_first_id...thread_last_id)

  threads.append Thread.new {
    puts "Thread #{thread_idx} | Starting!"

    thread_ids.step(batch_size) do |id|
      block.call(id, id + batch_size - 1)
    end

    puts "Thread #{thread_idx} | Complete!"
  }
end

threads.each { |t| t.join } # wait for all the Threads to complete

By the way, I tried a few different values of n_threads and found that 2 gave the best performance. Your mileage may vary.

So actually… this code is close to working, but it turns out there are a few problems with it.

Thread Gotchas

NameError ?

The first time I ran the threaded script, I saw all sorts of bizarre errors like:

NameError: uninitialized constant StudentTopic

This is because Rails 3.x is not thread-safe by default. (Rails 4+ is, and thankfully we’ll be upgraded to Rails 4 very soon.) There is a config setting to make Rails thread safe, but using that would be too easy a solution for this post 😉. The key is to make sure all files that you need are loaded before creating any of the threads – even files which are dependencies of the ones you need directly. In this case, Mastery requires Student. So, at the top of the thread_each function, before creating any threads, I added:

preload = [ Mastery, Student ] # reference these classes so Rails loads them before any threads start

1 + 1 > 2 ?

When I ran it again, it seemed like everything worked great! Until exactly 50% of the way done:

New Mastery Records: |========                    | 50.00%
rake aborted!
ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5 seconds
...
___/active_record/connection_adapters/abstract/connection_pool.rb:258:in `block (2 levels) in checkout'
...

We have two problems here. The first is that the connection pool only has 2 connections in it, and for some reason I need more than 2 - even though I only have 2 threads running! The truth is, I have three threads running – the two we create in thread_each, and the main thread that creates them. If the main thread grabs a connection from the connection pool, then there’s only 1 connection left in the pool for the other two threads. Oh noes!!!

(You may be thinking - Josh, in the script you’ve been showing us, the main thread doesn’t access any connections – and you’d be right. I’ve simplified things a little for the sake of this blog post. In the real script, the main thread executed a couple queries before starting up the threads.)

We could increase the size of the connection pool; it’s just a config value in database.yml. We could create our own ConnectionPool for use by this script. Or, we could have the main thread return its connection to the pool when it’s done using it. Since “releasing the connection from the main thread” is a one-line change and the change is local to the script I’m working on, that’s the option I chose. Here’s the one line; be sure to add it before creating the other threads:

ActiveRecord::Base.connection_pool.release_connection

50% ?

Okay! But there’s still the question – why did the script fail at exactly 50% done? Well, actually, it sort of didn’t.

When we created the two threads, the first one attempted to execute a query using ActiveRecord, grabbed the last connection from the connection pool, and carried along its merry way. Then the second thread came right along and tried to execute a query using ActiveRecord, but failed to get a connection from the pool, and immediately failed. The thing is, that second thread didn’t tell the main thread that it failed until the main thread called join on it.

The main thread created the two threads. The first one works great, the second one fails almost immediately. Then the main thread calls join on the first thread and waits until the first thread is complete – which happens when we are exactly 50% done with the script!!! At that point, the main thread gets control back and calls join on the second thread. And that’s when the second thread finally tells the main thread that it has failed.
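That delayed failure is easy to reproduce in isolation. A minimal sketch (nothing here is specific to the backfill script):

```ruby
Thread.report_on_exception = false # quiet Ruby's default thread-exception warning

failed = Thread.new { raise "boom" }
sleep 0.1 # the thread has long since died by now...
# ...but the main thread only finds out when it calls join:

error = nil
begin
  failed.join # join re-raises the thread's exception in the calling thread
rescue RuntimeError => e
  error = e
end

puts "main thread learned of the failure only at join: #{error.message}"
```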

Well, that’s pretty frustrating! However, again, there is a simple solution. You can instruct a thread to abort on an exception right away, instead of waiting for a join. We just need to add one more line to our thread_each function right before calling join:

threads.each { |t| t.abort_on_exception = true } # make sure the whole program fails if one thread fails
threads.each { |t| t.join } # wait for all the Threads to complete

And with that, here’s the final script end to end:

task :sync_mastery_scores, [:start_id, :max_id] => :environment do |_, args|
  ids = ( args[:start_id] .. args[:max_id] )

  thread_each(n_threads: N_THREADS, ids: ids, batch_size: BATCH_SIZE) do |first_id, last_id|
    Mastery.convert_old_to_new!( first_id, last_id )
  end
end

def thread_each(n_threads:, ids:, batch_size:, &block)
  preload = [ Mastery, Student ] # reference these classes so Rails loads them before any threads start

  ActiveRecord::Base.connection_pool.release_connection

  threads = []

  ids_per_thread = (ids.size / n_threads.to_f).ceil

  (0...n_threads).each do |thread_idx|
    thread_first_id = ids.first + (thread_idx * ids_per_thread)
    thread_last_id = thread_first_id + ids_per_thread

    thread_ids = (thread_first_id...thread_last_id)

    threads.append Thread.new {
      puts "Thread #{thread_idx} | Starting!"

      thread_ids.step(batch_size) do |id|
        block.call(id, id + batch_size - 1)
      end

      puts "Thread #{thread_idx} | Complete!"
    }
  end

  threads.each { |t| t.abort_on_exception = true } # make sure the whole program fails if one thread fails
  threads.each { |t| t.join } # wait for all the Threads to complete
end

Caveats

I went the route of custom threading because I had a tight deadline I was trying to hit.

In general, this sort of problem/solution:

  • is not time sensitive
  • doesn’t need access to shared state

Which means it’s a prime candidate for using background jobs. If I were to use Sidekiq, I’d even get the memory efficiency benefits that I get with raw threads.

There are lots of problems that are a great fit for threading: specifically, those that

  • are time sensitive OR
  • need access to shared state

For example:

  • data processing that must run during a request, where threads can return results sooner
  • getting something done immediately when the background job queue is deep

In Conclusion

So those are a few things that tripped me up with Ruby threading – I hope they help. In an upcoming blog post, I’m excited to go into more of the swapping-out-mastery-engines journey with you, so keep an eye out for that. If you notice anything I missed along the way, I’d love to hear about it and keep learning - please write to me and let me know. Thanks!


Josh Leven
@thejosh
Engineer at NoRedInk

New! Sharable assignments, SAT/ACT passages, exercises on argumentation


We’ve had some exciting new releases in the past few weeks! Here’s a recap:

New Free Features

Sharable Assignments

Teachers can now share a link to any assignment with their departments or grade-level teams. Simply click the “…” icon next to the assignment name, then “Share with Other Teachers.”

When other teachers click the link, they’ll see a copy of the original assignment that they can then customize and adjust!

Exercises on Claims, Evidence, and Reasoning

Our new Claims, Evidence, and Reasoning pathway is available for free through the end of July! To create an assignment, go to the assignment form, and click “Writing” and “Isolated Practice.”

These exercises coach students on how to evaluate and create powerful, logical, evidence-based arguments.

New Premium Feature

ACT/SAT Passages

NoRedInk now offers 12 passages specifically designed to help your students prepare for the ACT and SAT! These passages include the types of errors students will be asked to correct on test day.

To assign a passage, follow these steps:

  • Click “Quiz,” then select “New Quiz.”

  • Click “Select an ACT/SAT Passage.”

  • Choose a passage to assign!

Designing for Teachers: User-driven Information Architecture


It’s not breaking news that teachers are using technology in their classrooms more than ever. Public schools in the US now provide at least 1 computer for every 5 students and spend more than $3 billion per year on digital content. With their already packed schedules, teachers don’t have time to figure out websites and apps that are complicated and unintuitive. A key feature determining whether using a website feels simple and easy is the site’s information architecture, or IA. IA is the underlying structure that defines how the contents of a website (or any repository of information) are classified and organized.

Good IA goes unnoticed, allowing the user to navigate the site and find what they are looking for without a second thought. Bad IA makes itself obvious, and can often be the culprit of a frustrating user experience. My local supermarket, for example, continues to baffle me in the way that its goods are organized. On a hunt for peanut butter, I see the jelly and think to myself, “I must be getting close.” But alas, it’s hiding 4 aisles down, next to the olive oil, inexplicably.

This summer at NoRedInk, the product team embarked on a project to redesign the information architecture of the teacher side of the website. We hadn’t audited the IA since its launch in 2012, and we wanted to ensure that creating an assignment and viewing student results were as easy as finding the peanut butter next to the jelly. As with everything we do, the project focused heavily on user research. We utilized a variety of methods to get to the core problems with our IA and evaluate potential solutions, resulting in a final product we think teachers will find welcoming and intuitive when they come to NoRedInk this fall.

Phase 1: Gathering and synthesizing teacher feedback related to IA

The first step was to examine where our current IA wasn’t working well. We spoke to members of our Support, Customer Success, and Partnerships teams about feedback they’ve collected from teachers regarding usage challenges on the site. These teams interact with teachers every day, responding to support emails, conducting professional development training, and giving demos of the site, and they had great insights about common navigation pitfalls on the website. For example, the Support team tracks all the emails we get from teachers about specific problems or requests. The second most common issue reported this past school year was not being able to add new students to existing assignments - a problem we knew could be fixed with better IA.

We then conducted interviews with teachers who had recently signed up for NoRedInk in order to understand which aspects of teacher functionality were easy to do right way, and which parts of the site were more likely to go unnoticed. We learned that a few key aspects of NoRedInk - the different types of assignments we offer and the ability to track students’ mastery levels - weren’t always immediately clear to teachers in their initial experiences on NoRedInk.

Phase 2: Card Sorting

Once we knew the major problems with our current IA, we started to design solutions. Instead of building off the existing model, we wanted to give ourselves the freedom to start from scratch. So we began by listing out all of the teacher-facing pages on NoRedInk and experimenting with new ways of organizing the pages. Using a method called card sorting, we had teachers do the same. Card sorting is a tool that helps uncover the way users intuitively group and categorize the pages and functions on a website. The user is presented with a long list of the website’s contents, like “Preview an assignment,” and asked to sort them into categories and give each category a name. We recruited teachers who had never used NoRedInk to avoid bias from familiarity with the current structure. The card sorting tests revealed that participants largely agreed on the overarching categories on NoRedInk: Assignments, Student Performance, Classes, Settings, and Instructional Resources. From there, we had to drill down into the finer details of where more specific functions would be found and what to name them.

Optimal Workshop, the tool we used for card sorting, analyzes the results from each participant and quantifies how frequently cards were sorted into the same category.

We took what we learned from teacher interviews, support data, and card sorting to the drawing board, and each member of the product team mapped out some new structures. We had a brainstorming meeting in which we taped hard copies of the sitemaps up on the wall and went around with stickers to mark the ideas we liked the most.

Ideas of new sitemaps from our team brainstorm.

Phase 3: Tree Testing

Our brilliant designer Ben synthesized all of these ideas into two new versions of the IA: one that was more similar to the existing site, and one that was more “class-centric” - using a teacher’s classes as jumping off points to other parts of the site. We used a method called tree testing to evaluate whether the new versions made things easier to find compared to the existing IA. In a tree test, the user is presented with a hierarchical list representing the contents of a website and several tasks; the user clicks through the list and selects the places where they think they’d be able to complete the tasks.

A screenshot of one of the tasks in the tree test. Based on the feedback we heard in our initial research, we wanted to make sure that teachers could find where to add new students.

The data we collected from tree testing included where the participants expected to complete the tasks, the paths they took to get there, and how long they spent looking. We conducted several rounds of tree testing with participants who had never used NoRedInk before. After each round of testing we made changes to address places where participants were still having trouble. Sometimes we simply renamed a feature, like changing “Student Leaderboard” to “Top Performers.” Other times we changed the location of a feature, or added another way to navigate to it. All in all, we tested 7 different iterations until we came to a version that nearly all participants completed correctly and quickly.

Phase 4: New IA! Final Design and Validation

Ben transformed the final version of the IA we developed during tree testing into a beautiful new design for the teacher side of NoRedInk. The updated layout features a new menu bar with some renamed pages. “Lessons”, for example, became “Curriculum,” a clearinghouse for our scaffolded pathways, lessons, and tutorials designed to address a pain point we frequently encountered during our research: many teachers weren’t aware of the full breadth of curriculum available to them on NoRedInk. We also added a prominent sidebar menu where teachers can manage their class settings, including student rosters. The biggest change in the new IA is the class-centric teacher dashboard, where teachers can view their classes, see what’s upcoming for the week, and check how students are progressing on assignments. We knew from our research that those were things that teachers want to see right away, and we organized them front and center so teachers can jump quickly into assignments or student data, better informed about the current state of their classes.

To validate the new design, we tested a working prototype to see whether the real layout, compared to the more artificial layout in the tree test, was still just as easy to navigate. We tested with new users who had very little experience on the site and with NoRedInk Ambassadors, who use the site regularly. The feedback we got from both groups was hugely positive, with multiple teachers using the word “streamlined” - exactly what we were going for.

Our current dashboard (left) and the new design, not yet in production (right).

What we learned

Looking back, the most important source of information was teacher feedback, via the Support and Customer Success teams and directly through interviews. That feedback heavily influenced the solutions we designed, and tree testing was a great tool to fine-tune and validate them. Card sorting, though a common and logical place to start when it comes to IA, didn’t tell us much beyond what we already knew. A better way to start might have been to brainstorm creative ways of getting teacher feedback related to IA that eventually drove our final solution. We’re really excited to release this more straightforward, user-friendly IA to teachers this fall!

At NoRedInk, our product team is deeply user-driven, and we are consistently pushing ourselves to find even better ways of getting feedback from teachers and students. If you’re passionate about building a product that teachers really want, our team is hiring— we’d love to hear from you!

Christa Simone is a User Researcher at NoRedInk, leveraging research and data to help build a product that teachers love.

Decoding Decoders


Introduction

This post is written for an Elm-y audience, but might be of interest to other developers too. We’re diving into defining clear application boundaries, so if you’re a believer in miscellaneous middleware and think DRY principles sometimes lead people astray, you may enjoy reading.

Obviously-correct decoders can play a primary role in supporting a changing backend API. Writing very simple decoders pushes transformations on incoming data into a separate function, creating a boundary between backend and frontend representations of the data. This boundary makes it possible to modify server data and Elm application modeling independently.

Decoders

In Elm, Decoders & Encoders provide the way to translate JSON into and out of Elm values. Elm is type safe, and it achieves this safety in a dynamic world by strictly defining one-to-one JSON translations.

An example inspired by NoRedInk’s Writing platform follows. We ask students to highlight the claim, evidence, and reasoning of a paragraph in exercises, in their peers’ work, and in their own writing; we need to be able to encode, persist, and decode the highlighting work that students submit.


import Json.Decode exposing (..)
import Json.Decode.Pipeline exposing (..) -- This is the package NoRedInk/elm-decode-pipeline


{-| HighlightedText describes the "shape" of the data we're producing.

`HighlightedText` is also a constructor. We can make a HighlightedText-type record by
giving HighlightedText a Maybe String followed by a String--this is actually how decoders work
and the reason that decoding is order-dependent.
-}
type alias HighlightedText =
    { highlighted : Maybe String
    , text : String
    }

{-| This decoder can be used to translate from JSON,
like {"highlighted": "Claim", "text": "Some highlighted content..."},
into Elm values:

    { highlighted = Just "Claim"
    , text = "Some highlighted content..."
    }
-}
decodeHighlightedText : Decoder HighlightedText
decodeHighlightedText =
    decode HighlightedText
        |> required "highlighted" (nullable string)
        |> required "text" string
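
The post mentions encoding and persisting highlights as well as decoding them. As a sketch (assuming the encoder simply mirrors the decoder's field names, which the post doesn't show), the counterpart using Json.Encode might look like this:

```elm
import Json.Encode as Encode


{-| Sketch of the encoder counterpart to decodeHighlightedText,
assuming the JSON keys mirror the decoder above.
-}
encodeHighlightedText : HighlightedText -> Encode.Value
encodeHighlightedText record =
    Encode.object
        [ ( "highlighted"
          , record.highlighted
                |> Maybe.map Encode.string
                |> Maybe.withDefault Encode.null
          )
        , ( "text", Encode.string record.text )
        ]
```

Keeping the encoder and decoder shaped the same way makes the round trip easy to verify by eye.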

How do we create our model?

We’ve now decoded our incoming data but we haven’t decided yet how it’s going to live in our model. How do we turn this data into a model?

If we directly use the JSON representation of our data in our model then we’re losing out on the opportunity to think about the best design of our model. Carefully designing your model has some clear advantages: you can make impossible states impossible, prevent bugs, and reduce your test burden.

Suppose, for instance, that we want to leverage the type system as we display what is/isn’t highlighted. Specifically, there are three possible kinds of highlighting: we might highlight the “Claim”, the “Evidence”, or the “Reasoning” of a particular piece of writing. Here’s our desired modeling:


type alias Model =
    { writing : List Chunk
    }


type Chunk
    = Claim String
    | Evidence String
    | Reasoning String
    | Plain String

So now that we’ve carefully designed our Model, why don’t we decode straight into it? Let’s try to write a single combined decoder/initializer for this and see what happens.


import Model exposing (Chunk(..), Model)
import Json.Decode exposing (..)
import Json.Decode.Pipeline exposing (..)


decoder : Decoder Model
decoder =
    decode Model
        |> required "highlightedWriting" (list decodeChunk)


decodeChunk : Decoder Chunk
decodeChunk =
    let
        asResult : Maybe String -> String -> Decoder Chunk
        asResult highlighted value =
            toChunkConstructor highlighted value
    in
        decode asResult
            |> required "highlighted" (nullable string)
            |> required "text" string
            |> resolve


toChunkConstructor : Maybe String -> String -> Decoder Chunk
toChunkConstructor maybeString text =
    case maybeString of
        Just "Claim" ->
            succeed (Claim text)

        Just "Evidence" ->
            succeed (Evidence text)

        Just "Reasoning" ->
            succeed (Reasoning text)

        Nothing ->
            succeed (Plain text)

        Just otherString ->
            fail ("Unknown kind of highlight: " ++ otherString)

The decodeChunk logic isn’t terrible right now, but the possibility for future hard-to-maintain complexity is certainly there. The model we’re working with has a single field, and the highlighted data itself is simple. What happens if we have another data set that we want to use in conjunction with the highlighted text? Maybe we have a list of students with ids and the highlights may have been done by different students, and we want to combine the highlights with the students… It’s not impossible, but it’s not as straightforward as we might want.

So let’s try a different strategy and do as little work as possible in our decoders. Instead of decoding straight into our Model we’ll decode into a type that resembles the original JSON as closely as possible, a type which at NoRedInk we usually call Flags.


import Json.Decode exposing (..)
import Json.Decode.Pipeline exposing (..)


type alias Flags =
    { highlightedWriting : List HighlightedText
    }


decoder : Decoder Flags
decoder =
    decode Flags
        |> required "highlightedWriting" (list decodeHighlightedText)


type alias HighlightedText =
    { highlighted : Maybe String
    , text : String
    }


decodeHighlightedText : Decoder HighlightedText
decodeHighlightedText =
    decode HighlightedText
        |> required "highlighted" (nullable string)
        |> required "text" string

Note that HighlightedText should only be used as a “Flags” concept. There might be other places in the code that need a similar type but we’ll create a separate alias in those places. This enforces the boundary between the Flags module and the rest of the application: sometimes it’s tempting to “DRY” up code by keeping type aliases in common across files, but this becomes confusing because it ties together modules that have nothing to do with one another if the data that we’re describing differs in purpose. Internal Flags types ought to describe the shape of the JSON. Type aliases used in the Model ought to be the best representation available for application state. Conflating the types that represent these two distinct ideas may eliminate code, but also eliminates some clarity.

We’re not home yet. We now have a Flags type but we’d really like a Model. Let’s write an initializer to bridge that divide.


import Json.Decode exposing (..)
import Json.Decode.Pipeline exposing (..)


{- FLAGS -}

type alias Flags =
    { highlightedWriting : List HighlightedText
    }


decoder : Decoder Flags
decoder =
    decode Flags
        |> required "highlightedWriting" (list decodeHighlightedText)


type alias HighlightedText =
    { highlighted : Maybe String
    , text : String
    }


decodeHighlightedText : Decoder HighlightedText
decodeHighlightedText =
    decode HighlightedText
        |> required "highlighted" (nullable string)
        |> required "text" string


{- MODEL -}

type alias Model =
    { writing : List Chunk
    }


type Chunk
    = Claim String
    | Evidence String
    | Reasoning String
    | Plain String


{- CREATING A MODEL -}


init : Flags -> Model
init flags =
    { writing = List.map initChunk flags.highlightedWriting
    }


initChunk : HighlightedText -> Chunk
initChunk { highlighted, text } =
    text
        |> case highlighted of
            Just "Claim" ->
                Claim

            Just "Evidence" ->
                Evidence

            Just "Reasoning" ->
                Reasoning

            Just otherString ->
                -- For now, let's default to Plain
                Plain

            Nothing ->
                Plain

We’re still doing the same transformation as before but it’s easier to trace data through the initialization path now: We decode JSON to Flags using a very simple decoder and then Flags to Model using an init function with a type that actually shows what transformation is happening. Plus, as we’ll see in the next section, we have more control and flexibility in how we handle the boundary of our Elm application!
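
To see the whole path in one place, here's a sketch of gluing these two steps together when the program receives its flags as a raw JSON value (`initFromJson` is our name for illustration, not from the post):

```elm
import Json.Decode exposing (Value, decodeValue)


{-| Sketch: JSON -> Flags (via the simple decoder) -> Model (via init).
-}
initFromJson : Value -> Model
initFromJson flagsJson =
    case decodeValue decoder flagsJson of
        Ok flags ->
            init flags

        Err _ ->
            -- Fall back to an empty model when the JSON doesn't match.
            { writing = [] }
```

Each arrow in the comment corresponds to exactly one function, which is what makes the data easy to trace.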

Leveraging Decoders

The example code we’ve been using involves modeling a paragraph with three different kinds of highlights. This example is actually motivated by a piece of NoRedInk’s Writing product, in which students highlight the component parts of their own writing. Earlier this year, students were only ever asked to highlight the Claim, Evidence, and Reasoning of paragraph-length submissions. This quarter, we’ve worked to expand that functionality in order to support exercises on writing and recognizing good transitions; on embedding evidence; on identifying speaker, listener, and plot context; and more. But uh-oh–our Writing system assumed that we’d only ever be highlighting the Claim, Evidence, and Reasoning of a paragraph! We’d been storing JSON blobs with strings like “claim” in them as our writing samples!

So what did this mean for us?

  1. We needed to store our JSON blobs in a new format: the existing format was too tightly tied to Claim, Evidence, and Reasoning
  2. We needed to migrate our existing JSON blobs to the new format
  3. We needed to support reading both formats at the same time

In a world where the frontend application has a strict edge between JSON values and Elm values and a strict edge between Elm values and the Model, this is straightforward.


import Json.Decode exposing (..)
import Json.Decode.Pipeline exposing (..)


type alias Flags =
    { highlightedWriting : List HighlightedText
    }


{-| This decoder supports the old and the new formats.
-}
decoder : Decoder Flags
decoder =
    decode Flags
        |> custom (oneOf [ paragraphContent, deprecatedParagraphContent ])


type alias HighlightedText =
    { highlighted : Maybe String
    , text : String
    }


paragraphContent : Decoder (List HighlightedText)
paragraphContent =
    {- We've skipped including the actual decoder in order to emphasize
       that we are easily supporting two radically different JSON blob
       formats--it doesn't actually matter what the internals of those blobs are!
    -}
    field "newVersionOfHighlightedWriting" (succeed [])


deprecatedParagraphContent : Decoder (List HighlightedText)
deprecatedParagraphContent =
    field "highlightedWriting" (list deprecatedDecodeHighlightedText)


deprecatedDecodeHighlightedText : Decoder HighlightedText
deprecatedDecodeHighlightedText =
    decode HighlightedText
        |> required "highlighted" (nullable string)
        |> required "text" string

Conclusion

As we’ve seen, it’s easier to reason about data when each transformation of the data is done independently, and using decoders well can help us handle the intermediate modeling moments that are common in software development.

We hope that you’re interested in how NoRedInk’s Writing platform works: We’ve loved working on it and we hope you’ll ask us about it! We’ve gotten to work with some really cool tools and to try out cool architectural patterns (hiii event log strategy with Elm), all while building a pedagogically sound product of which we’re proud. In the meantime, may your modules have clean APIs, your editor run elm-format on save, and your internet be fast.


Tessa Kelly
@t_kelly9
Engineer at NoRedInk


Jasper Woudenberg
@jasperwoudenberg
Engineer at NoRedInk


New! Updated Assignment Form, New Pre-made Diagnostics, and Easier Class Management


Welcome to the 2017-2018 school year! We’ve made some big updates this summer.

New Free Features

Assignment Form

We’ve streamlined the assignment creation process into 3 core steps: pick the type of assignment, select the content, and handle the logistics. Our simplified form makes it faster to get work to your students!

Pre-made Diagnostics

Not sure where to start? Try one of our pre-made planning diagnostics! The diagnostics select standards-aligned, grade-level appropriate content to get your students started. Once you have student data, you can decide what to teach next.

Here are sample diagnostics for grades 4-6, grades 7-9, and grades 10-12. You can browse our full library of pre-made diagnostics, including diagnostics specifically aligned to state assessments, at this link.

Class Management

Our new class management page is your central hub for controlling your courses and rosters.

New! Interactive lessons, view data as you assign, and reuse past assignments


We’re excited to announce more back-to-school updates to help support you and your students this year!

New Free Features

Interactive Lessons

We’ve rolled out our first batch of interactive lessons, which will introduce students to concepts prior to the start of practice. These lessons include friendly visuals, guided instruction, and targeted tips to set students up for success!

To try out a tutorial, go to your Curriculum page, scroll to “Who and Whom,” and then click “Practice this!”

View Data in the Assignment Form

Have student performance data at your fingertips as you create assignments. In the assignment form, expand your student roster to see up-to-date mastery and assessment data. Leverage this data to differentiate assignments for individual students or groups of students.

Reuse Past Assignments

Have assignments from last school year that you loved? On your assignments page, select to view “My archived classes.” You’ll then have the option to share or reuse work from prior classes. Learn more.

New! Curriculum and updates to gradebook, assignments page, and site colors!


We’ve done some cleanup and adjustments to make NoRedInk even easier to use!

New Premium Features

New Exercises on Transitions and Embedding Evidence

We’ve released new pathways focused on “Transition Words and Phrases” and “Avoiding Plagiarism and Using Citations.” Students can develop skills around producing a logical flow of ideas, as well as skills related to paraphrasing, citation, and plagiarism detection.

All topics are available as part of NoRedInk Writing! Free teachers can also try out a topic in each pathway.

New Free Features

Updated Gradebook

Our new gradebook is easier to scan, sort, and export! Learn about the full update here.

Updated Assignments Page

Quickly scan your in-progress, past-due, and upcoming assignments. Take advantage of our prompts to create growth quizzes or other new assignments for your students.

Updated Colors

Our colors got a facelift! We heard from teachers and students that our use of purple during level 1 of mastery could be discouraging or confusing – we’ve updated the colors to be brighter, friendlier, and clearer for your students.

New! “Create a unit” and improved search


Create a Unit

Quickly and easily build a unit of assignments! Start with a Unit Diagnostic and then add on a Practice and a Growth Quiz with a single click. This is a great way to track student growth and ensure skill development.

You’ll see the “create unit” button on your assignments page. You can also check out this Help Center article for more information!

Improved Search

We’ve improved the searchability of our assignment form to make it easier for teachers to find what they’re looking for!

The Most Object-Oriented Language


I’ve been listening to the Elixir Fountain podcast. It’s a pretty great source of interviews with people who have played some part in the rise of the Elixir programming language.

Episode 19 was particularly interesting to me. It’s an interview with Joe Armstrong, co-creator of Erlang. The discussion includes a lot of details about the platform Elixir is built on top of.

There’s one moment in the episode that surprised me. Joe makes a comment about Erlang being “the most object-oriented language.” The same can be said of Elixir.

Wait, what?

Does Elixir even have objects? No. Isn’t Elixir a functional programming language? Yes.

When I first picked up Elixir, I heard some developers comment that it was an easy transition from Ruby because it was similar in some ways. Early on though, I decided I didn’t agree with that. The syntax was Ruby inspired, sure, but with immutability, pattern matching, and recursion everywhere, it felt like a significantly different beast to me.

Then I started to dig into processes more. As I got deeper, Joe’s comment from the podcast culminated in a significant “Ah ha” moment for me. Let’s see if we can gain a deeper understanding of what he meant.

What Is Object-Orientation?

This is a pretty challenging question for developers. We don’t often seem to agree on exactly how to define this style of programming.

The first book I ever read about object-oriented programming explained it from the point of view of C++ and it drilled into me that it was a combination of three traits: encapsulation, inheritance, and polymorphism. That definition has drawn increasing criticism over time.

A likely better source of definitions for the term is the man who coined it, Dr. Alan Kay. There have been attempts to catalog what he has said about object-orientation. My favorite definitions come from an email he wrote to answer this question in 2003. It talks about his original conception, which includes this description:

I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning – it took a while to see how to do messaging in a programming language efficiently enough to be useful).

The email culminates in this definition:

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.

You may have noticed that these definitions never mention things like classes or instances. Wikipedia talks a lot about such things when defining object-oriented programming but I view that more as a description of where a lot of languages have gone with the concept. The heart of object-oriented programming doesn’t seem to require such bits, at least according to the man who named the style.

OK, But Where Are the “Objects?”

Playing with the semantics of these definitions is fun and all, but it does seem like we would at least need some objects for this idea of object-oriented programming to apply to us. Elixir doesn’t strictly have anything called objects, but let’s use a quick example to look at what it does have.

We’ll create a script that fetches a page off the web, crudely parses out the title, and prints it. This doesn’t have any practical value. The whole point is to give us some code to examine, in Elixir’s normal context.

We’ll begin by creating the project as normal:

$ mix new --sup title_fetcher
* creating README.md
* creating .gitignore
* creating mix.exs
* creating config
* creating config/config.exs
* creating lib
* creating lib/title_fetcher.ex
* creating test
* creating test/test_helper.exs
* creating test/title_fetcher_test.exs

Your Mix project was created successfully.
You can use "mix" to compile it, test it, and more:

    cd title_fetcher
    mix test

Run "mix help" for more commands.

The --sup switch that I added tells Mix to go ahead and give me a standard Supervisor and prepare my application for running by starting said Supervisor. This just gets us going with less effort.

Now, we need a way to fetch the web page so let’s add the HTTPoison library as a dependency of our project. To do that, we open mix.exs and make two small tweaks:

defmodule TitleFetcher.Mixfile do
  # ...

  defp deps do
    [{:httpoison, "~> 0.8.1"}]
  end
end

This first change just adds the dependency.

defmodule TitleFetcher.Mixfile do
  # ...

  def application do
    [applications: [:logger, :httpoison],
     mod: {TitleFetcher, []}]
  end

  # ...
end

The other change was to add :httpoison to the list of applications that Elixir will start up for us. An application in this context means something different from the normal computer term. It helps me to think of it as a reusable component.

With those changes in place, we can ask Mix to fetch our dependencies:

$ mix deps.get
Running dependency resolution
Dependency resolution completed
  certifi: 0.3.0
  hackney: 1.4.10
  httpoison: 0.8.1
  idna: 1.1.0
  mimerl: 1.0.2
  ssl_verify_hostname: 1.0.5
* Getting httpoison (Hex package)
Checking package     (https://s3.amazonaws.com/s3.hex.pm/tarballs/httpoison-0.8.1.tar)
Using locally cached package
* Getting hackney (Hex package)
Checking package     (https://s3.amazonaws.com/s3.hex.pm/tarballs/hackney-1.4.10.tar)
Using locally cached package
...

We’re ready to add our little bit of code. This time we need to open lib/title_fetcher.ex. It contains a commented out line inside the start() function that looks like this:

      # worker(TitleFetcher.Worker, [arg1, arg2, arg3]),

We need to change that to run our code:

      worker(Task, [&fetch_title/0]),

A Task is a built-in tool for wrapping some code in an Elixir process. Here we’ve instructed it to call the fetch_title() function. That’s the only other bit that we need to add:

defmodule TitleFetcher do
  # ...

  defp fetch_title do
    body = HTTPoison.get!("https://www.noredink.com/") |> Map.get(:body)
    Regex.run(~r{<title>([^<]*)</title>}, body, capture: :all_but_first)
    |> hd
    |> IO.puts

    System.halt
  end
end

This is the code for what we set out to do. The first pipeline retrieves the contents of NoRedInk’s homepage. The second extracts and prints the title of that page. Then there’s a call to System.halt() to shut everything down.

The end result is probably what you expect to see:

$ mix run --no-halt
Compiled lib/title_fetcher.ex
NoRedInk makes learning grammar fun and easy

The result isn’t what I wanted to show you though. Let’s make this code show us a little about how it did the work. First, we know we had the Task process managing what we were telling it to do, but let’s ask if there were other processes doing stuff:

defmodule TitleFetcher do
  # ...

  defp fetch_title do
    Process.registered |> Enum.sort |> IO.inspect

    body = HTTPoison.get!("https://www.noredink.com/") |> Map.get(:body)
    Regex.run(~r{<title>([^<]*)</title>}, body, capture: :all_but_first)
    |> hd
    |> IO.puts

    System.halt
  end
end

There’s just one new line at the beginning of the function. All it does is print out all registered processes for us to inspect. Here’s the updated output:

$ mix run --no-halt
Compiled lib/title_fetcher.ex
[Hex.Registry.ETS, Hex.State, Hex.Supervisor, Logger, Logger.Supervisor,
 Logger.Watcher, Mix.ProjectStack, Mix.State, Mix.Supervisor, Mix.TasksServer,
 TitleFetcher.Supervisor, :application_controller, :code_server,
 :disk_log_server, :disk_log_sup, :elixir_code_server, :elixir_config,
 :elixir_counter, :elixir_sup, :erl_prim_loader, :error_logger, :file_server_2,
 :ftp_sup, :global_group, :global_name_server, :hackney_manager, :hackney_sup,
 :hex_fetcher, :httpc_handler_sup, :httpc_hex, :httpc_manager,
 :httpc_profile_sup, :httpc_sup, :httpd_sup, :inet_db, :inets_sup, :init,
 :kernel_safe_sup, :kernel_sup, :rex, :ssl_listen_tracker_sup, :ssl_manager,
 :ssl_sup, :standard_error, :standard_error_sup, :tftp_sup, :tls_connection_sup,
 :user]
NoRedInk makes learning grammar fun and easy

Wow, there’s kind of a lot going on in there! Most of what we see are things that Elixir set up to get things ready for our code to run.

You may notice that the list doesn’t include any mention of our HTTPoison dependency. That’s because HTTPoison is a thin Elixirifying wrapper over an Erlang HTTP client library called hackney. If you reexamine the list above you will find a couple of hackney processes, including :hackney_manager.

We’ve reached the key idea of how Elixir projects work.

Inside our Task, we called a function to fetch the page content: HTTPoison.get!(). That executed in our process. But we now have strong reason to believe that a separate hackney process is what actually did the fetching over the wires. If that’s true, these processes must have sent some messages to each other under the hood of these seemingly simple function calls.

Let’s make another debugging change to confirm our hunch:

defmodule TitleFetcher do
  # ...

  defp fetch_title do
    :sys.trace(:hackney_manager, true)

    body = HTTPoison.get!("https://www.noredink.com/") |> Map.get(:body)
    Regex.run(~r{<title>([^<]*)</title>}, body, capture: :all_but_first)
    |> hd
    |> IO.puts

    System.halt
  end
end

This time the line added turns on a debugging feature of the OTP (the framework our code is running on top of). We’ve asked for a message trace() of that :hackney_manager process that we found earlier. Observe what that reveals:

$ mix run --no-halt
Compiled lib/title_fetcher.ex
*DBG* hackney_manager got call {new_request,,#Ref,
                                   {client,undefined,hackney_dummy_metrics,
                                       hackney_ssl_transport,
                                       "www.noredink.com",443,
                                       >,[],nil,nil,nil,
                                       true,hackney_pool,5000,false,5,false,5,
                                       nil,nil,nil,undefined,start,nil,normal,
                                       false,false,false,undefined,false,nil,
                                       waiting,nil,4096,>,[],undefined,nil,
                                       nil,nil,nil,undefined,nil}} from     
*DBG* hackney_manager sent {ok,{1457,541907,594514}} to , new state     {...}
NoRedInk makes learning grammar fun and easy
*DBG* hackney_manager got cast {cancel_request,#Ref}

Bingo. The :hackney_manager process did indeed receive a message asking it to grab the web page we wanted to load. You don’t see it here, but hackney eventually sent a message back to the requester containing the page content.

This is how Elixir works. Concerns are divided up into processes that communicate by sending messages to each other. These processes are the “objects” of this world. They are what we should be judging to evaluate Elixir’s object-oriented merits.
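
To make the process-as-object parallel concrete, here's a minimal sketch of a "counter object" built from nothing but spawn, send, and receive (the module and message names are ours for illustration, not from any library):

```elixir
defmodule Counter do
  # Spawn a process holding private state; the pid is our "object reference".
  def new(initial \\ 0) do
    spawn(fn -> loop(initial) end)
  end

  # A "method call" is just a message.
  def increment(pid), do: send(pid, :increment)

  # Ask the process for its current count and wait for the reply.
  def value(pid) do
    send(pid, {:value, self()})

    receive do
      {:value, count} -> count
    end
  end

  # The receive loop holds the state; nothing outside can touch it.
  defp loop(count) do
    receive do
      :increment ->
        loop(count + 1)

      {:value, caller} ->
        send(caller, {:value, count})
        loop(count)
    end
  end
end

# counter = Counter.new()
# Counter.increment(counter)
# Counter.value(counter)  # => 1
```

The count is completely private to the process, and increment/value are nothing more than messages delivered to it.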

Get to the Score Already!

We can assess how Elixir stacks up as an object-oriented language point by point using all of the definitions mentioned earlier. First up is the idea of “biological cells… only able to communicate with messages.” I hope the example above has shown that this is how you get things done in Elixir. In several other languages, objects are the recommended design tool, but in Elixir processes are essential.

What about the idea of “local retention and protection and hiding of state-process,” which we often call encapsulation? To put it bluntly, it’s enforced. Processes share nothing. Outside of the debugging tools, this is just the law of the land in Elixir.

Inheritance? No. Elixir doesn’t have this concept. It’s possible to build your own version on top of processes, but nothing is provided. This might be another plus.

There’s one last point: “extreme late-binding of all things.” We have tried to shorten this idea to polymorphism, but that’s not actually the same thing. Let’s sort this out with another example.

We’re just going to play around in Elixir’s REPL (iex) this time. Let’s ask Elixir to print something to the screen:

iex(1)> IO.puts("Printing to the terminal.")
Printing to the terminal.
:ok

That looks pretty straightforward, but we want to know how it works.

IO.puts() takes an optional first argument that was filled in with a default above. Let’s add it in explicitly:

iex(2)> IO.puts(Process.group_leader, "More explicit.")
More explicit.
:ok

This version says that we want to send the output to the group_leader() of our process. That’s the same thing the default did for us in the earlier example. In the case of our REPL session, the group_leader() is essentially pointing at STDOUT. But what does group_leader() really return, in terms of Elixir data types?

iex(3)> pid = Process.group_leader
#PID
iex(4)> IO.puts(pid, "It's processes all the way down!")
It's processes all the way down!
:ok

If you’ve been paying attention, I suspect you’re not at all surprised to see that this is another process. Why is that the case though, really?

The phrase “extreme late-binding of all things” is a way of saying something like, “I want the computer to sort out what code to actually run as the need arises.” In some languages that might mean two different kinds of objects could have print() methods and the computer decides which one to invoke based on which object the message is sent to.

In Elixir, this kind of late dispatch is simply sending messages to some kind of process that understands them. If we swap out that process with another one that understands the same messages, we can change what code is run. For example, we could capture the output from IO.puts():

iex(5)> {:ok, fake_terminal} = StringIO.open("")
{:ok, #PID}
iex(6)> IO.puts(fake_terminal, "Where does this go?")
:ok
iex(7)> {inputs, outputs} = StringIO.contents(fake_terminal)
{"", "Where does this go?\n"}
iex(8)> outputs
"Where does this go?\n"

StringIO.open() creates a new process that pretends to be an I/O device. It understands the messages a real I/O device receives and replies with, but it substitutes input from a string and captures output into a separate string. This is possible because Elixir doesn’t care how the process works internally. It just delivers the messages to it. What code to run is sorted out inside the process itself as it receives the messages.

For a language we don’t traditionally think of as object-oriented, Elixir scores surprisingly well against a few different definitions. Processes make darn good objects!

Does This Matter?

Is it actually significant which language we decide to crown as “The Most Object-Oriented?” Of course not. It can be interesting to consider the parallels though.

If you shift your focus just right when looking at Elixir code, you can see the objects pop out. They’re a little different in this world. However, and this is the neat bit if you ask me, your object-oriented design skills (assuming you studied the good stuff, like Practical Object-Oriented Design in Ruby) still very much apply. I didn’t expect that and it’s a nice surprise.


James Edward Gray II
@JEG2
Engineer at NoRedInk

On-boarding as a New Remote Engineer


Think about your on-boarding process. I don’t mean the part where HR or an office manager gives out paperwork. I mean the part where the new hire does actual engineering work. Do you have a fellow engineer pairing with the new hire through the process? Do you have a checklist to let the new hire know the scope of on-boarding and when they are finished? Do you ask the new hire to join in on the conversation to continuously improve the on-boarding process afterward? Do you set expectations for the first few weeks? If you didn’t answer positively to all of those questions, your on-boarding process can be improved.

After working for a few different startups over the years, and working in two more completely different industries, I have been through a wide variety of on-boarding processes. Some were intricate. Some were non-existent. But all were about me getting up to speed on my own. I could ask for help with things here and there at each place. However, the general vibe was that you needed to be up to speed as fast as possible for the sake of the company.

The First Day

Things are different at NoRedInk. My on-boarding experience has been less focused on what I personally need to do to get up to speed. It has been more focused on what we do as a whole and how I contribute to this process.

Rather than telling me what to do and seeing if I can do it, the first day was all pairing with another engineer. We ran through the administrative work of setting up accounts for services and credentials together. Doing the administrative work together was great because I had the chance to ask questions about what the different services did. More importantly, we had a chance to discuss why we chose each service and how we generally use them in day to day engineering.

We did actual engineering work next. My pair walked me through the steps of choosing an available issue, style conventions, and how to finish up on GitHub. All of this information is overwhelming to take in on the first day if you are by yourself. To help alleviate some of this cognitive load, we have to talk about one of the most amazing things about being at NoRedInk: wikis!

Wikis!

We have just about everything documented somewhere in a wiki. Imagine your employee handbook but better. There are five main wikis: Engineering, BizOps, Sales, Product, and Non-Technical. If you are curious about how sales does their demos, you can find an entry that details the whole process. If you want to know how content gets created on the site, there is an entry in the Non-Technical wiki. Practically anything you can think of is in a wiki somewhere! But wikis are worthless if the information becomes stagnant. So we constantly update entries and make changes to keep the information current.

Let’s focus on the engineering wiki. The first thing to know is that even though it is an engineering wiki, the information is mostly about process rather than algorithms. Some examples are: “The Best Practices for Discussions”, “The Communication Policy for Email, Github, Slack”, and “The Engineer Onboarding Checklist”. Yep, we keep the checklist of things to do during on-boarding in our wiki! To keep the checklist current, the last item is to update the checklist with any information that might have changed.

Our technical stuff is more like what you would expect in an engineering wiki: best practices for front-end, style guides, domain information. With the information in the wiki and out of the heads of individuals, it is much easier to ensure everyone understands what the process currently is. We can also ensure we are following an up-to-date process. If we decide to make a slight change to our pull request process, it can happen in the wiki and everyone can be aware of it.

The biggest benefit I get from having so much information written down is being able to understand it at my own pace. There are some familiar bits that I get the first time around. Then there are some unfamiliar bits that I need to read multiple times to really understand. In day to day work, it means that I don’t have to bother someone else when I have a simple question. I can find the information in the wiki and answer my own question. As a remote worker, it is a great feeling to know that you can get most of the information you need at any time.

Remote Culture

The pairing doesn’t stop with on-boarding. Every day that I have been at work, I have paired with someone on something engineering related. About 80% of the time it is technical work, but sometimes we pair on non-technical work too. Pairing can be a quick 15-minute session to work through a bug, or it can be a full 8 hours if you (and your pair) want. I have paired with people in the office and with my fellow remote workers.

One of the big issues with working remotely is losing the connection with your co-workers. NoRedInk is the most remote-friendly place I have worked. Co-located people will often ask remote people if they would like to pair, and vice versa. If a discussion happens in-office that concerns multiple people, it will usually move to email, GitHub, or Slack. The purpose is to keep everybody in the loop, whether they are working in-office or remotely.

Much of our engineering process is centered around asynchronous communication. This is a huge win for working remotely! The Slack channels are always abuzz with information and ideas. But there are times when a face-to-face call makes more sense than typing. In those cases, we will throw up a quick five-minute hangout, fire up Screenhero, or even use a regular old telephone. Once we hash out the details, we report the conclusion back in the channel. There’s no sense in forcing asynchronous communication if a synchronous call is more effective. Again, the focus is on communicating effectively and keeping everyone connected.

To make sure the different teams stay connected, we have a weekly team lunch. The in-office folk gather in a hangout with the remote folk and someone presents on a topic. There is a projector in the office (as most offices might have) so people don’t have to crowd around a single screen. We have presented on what the engineering team is doing, how the process is changing within the product team, what is going on with sales, and less company-related topics like OS X productivity tips.

We have expanded so much as a team lately that during our weekly lunches the external in-office microphone couldn’t keep up. We had so many people around the lunch table that it was hard for the remote people to hear the conversation. To make the audio better for those of us joining remotely, we bought a second external microphone. Now we can hear much better when someone talks at the far end of the table. Little things like adding a microphone go a long way toward making remote people feel like they are part of the office.

One of Us

There is quite a bit more that is great about working at NoRedInk. Rather than reading more in this blog post, you should come experience it for yourself. We are always looking for great people to join and help us make the team even better. Check out the jobs page to see if something fits. Or if you have tips on where we can improve our process, I’d love to hear about it. Send me a tweet at @st58 with your suggestions.

Hardy Jones
@st58
Engineer at NoRedInk

Our Engineering Hiring Process


Hiring is something that we care deeply about at NoRedInk. We aim to have a process that’s reliable and efficient, and that provides a positive and respectful experience for the candidate.

If you’re thinking about applying, this should help you learn what to expect from each step of the process.

The Interview Steps

  • Application
  • Take Home Challenge
  • Conversation with Engineer
  • Technical Interviews
  • Conversation with Director or CEO
  • Lunch with Team

Application

You can find details about the engineering positions we’re hiring for on our jobs page.

For each position, the expertise and qualities we’re looking for are described in its job post. At this initial stage, we’re looking to see whether your resume and cover letter match the requirements of the job you’re applying for.

This is also a great opportunity for you to tell us a little about yourself and why you want to join NoRedInk. Although a cover letter isn’t required, it’s definitely a nice way for us to learn a bit more about you.

Take Home Challenge

Our take-home challenge has a two-hour time limit and can be taken on your own schedule.

We’ve built a simple web app to administer all take home submissions. This allows you to log in and start the challenge at any time. When you submit it, your solution is sent to one of our engineers to be evaluated. These evaluations are “blind” in that the evaluator doesn’t have access to any personally identifiable information about who they’re evaluating, such as name or resume.

There are two reasons for the two-hour window over an unlimited window for working on the challenge:

  • It helps us standardize solutions across all candidates applying for the same position, which helps us make fair evaluations.
  • It helps us be respectful of the candidate’s time.

We recommend using the language and, if necessary, framework you’re most familiar with. We can almost surely find an engineer on our team that’s skilled enough at it to evaluate your solution fairly.

Conversation with an Engineer

The conversation is scheduled for thirty minutes. We’ll ask you non-technical questions so that we can learn more about you as a candidate and how you’d fit as an employee.

You’ll also have time to ask us questions about your interviewer, the company, or anything else you want to learn about us. If you can, prepare your questions beforehand.

Technical Interviews

We do three technical interviews. Each one lasts for one hour and thirty minutes, plus an additional thirty minutes that you can use to ask questions.

The first technical interview is done remotely. The two following interviews are contingent on the first and happen on a different day. These are on-site for candidates applying to work from our SF offices and remote otherwise.

We never ask you to write code on a whiteboard, because we’re not hiring you to write code on a whiteboard. We ask you to bring your own laptop, with your development environment of choice, to use for 100% of the technical interview process. We want to understand what it’d be like to work with you, so the closer your environment is to what you’d normally use on the job, the better.

We also encourage you to use the same tools in the interview as you would on the job. Please Google whatever you like; it’d be weird to pretend like you didn’t have an Internet connection when you clearly would on the job. Restricting your normal toolbox would give us a less accurate picture of what it’d be like to work with you.

Our goal is to test skills you use on a daily basis, as opposed to computer science trivia you wouldn’t actually use on the job. Still, we recognize that you may not be used to solving problems from start to finish in such a limited timeframe. A good way to get comfortable with time limits is to do some problems on Exercism.io with a self-imposed time limit of an hour. James Gray gave a talk at Railsconf 2016 with other tips on interview preparation.

Content-wise we’ll test the technologies and skills you’ll use in your day-to-day work. For example, if you’re applying for a Front-End job, we’ll want you to demonstrate experience with JavaScript, HTML and CSS.

Conversation with Director or CEO

This is the final interview and it’s also non-technical. It lasts for two hours with one hour and thirty minutes of interview and thirty minutes for your questions. We’ll want to learn about details from your work experience and what makes you a great engineer.

This is also an excellent time to ask questions about the company’s strategic plans, vision, and values, but you are free to ask any questions you like.

Lunch with Team

On the same day as your final interview, you’ll have lunch with the team so you can get to know your potential coworkers. This is not an interview, but just an opportunity for you to get to know us better. There’s no evaluation for this step on our end.

Timeline

We pride ourselves on moving extremely quickly with our interviewing process, but a lot of the speed depends on your availability. Interviewing takes priority over all other work here at NoRedInk, and we can usually accommodate interviews at a day’s notice.

In practice, it takes two to four weeks from your application until the offer. We evaluate most resumes and take-home challenges within 24 hours, and we’ll be available for your first technical interview within 48 hours.

Conclusion

Creating a good interview process is hard, especially as a startup.

Increasing the transparency of our hiring process is important not only so that you know what to expect, but also so we can make our process better.

We hope learning about our process makes you feel more prepared and confident. If you have any questions about our process or the company, please feel free to get in touch.

Ready to apply? Apply now on our jobs page!


Marcos Toledo
@mtoledo
Director of Engineering at NoRedInk


Writing Friendly Elm Code


So you’ve chosen to use an elegant language designed with the programmer in mind. You’ve followed the Elm Architecture tutorial, so you understand how to structure an application. You’ve written some Elm code. Maybe you’ve even read other people’s Elm code, or taken over someone else’s Elm app. Whatever your level of experience in Elm, and despite the work the language puts in to keep your code readable and tidy, it’s possible to write some deeply unfriendly Elm code.

So why write “friendly” code? For us, the first people we have in mind when we are writing code are the students and the teachers who use NoRedInk. But we’re also writing code for each other–for other engineers–rather than writing code for just ourselves. Writing readable code can be hard, especially since not everyone agrees what is or is not readable. On my team, there are different preferences as to whether doc comments, type signatures, or descriptive naming are most important when encountering a new chunk of code. I don’t want to make an argument there (#descriptive_naming), but over the course of working with and writing Elm components of various sizes and complexities, I’ve found some general guidelines to be helpful.

I recommend that Elm programmers don’t be fancy, limit case/if..then expressions, and think of the Elm Architecture as a pattern. Obviously my opinions are fact, but feel free to have your own in public at me @t_kelly9.

Being Fancy

Being Fancy is fun. Doing something tricky, or unexpected, can feel like you’ve found The Neatest Way to solve a problem. But being fancy when modeling data makes it harder to understand that data. For a functional language, in which it’s likely you’re doing a lot of mapping and filtering, creating complex types is likely to cause frustration. Writing or modifying decoders for particularly complicated types is likely to cause active sorrow and regret.

Here’s an enthusiastically contrived example of over-complicated types:

Suppose we have a commenting component that we made for use on our media website. Users can comment on books, songs, pictures, and videos. So far, we’ve only wanted users to be able to comment with plaintext, leading to a hijacking of our platform as a means of showcasing ascii art. Our model looks like this:



type alias Model =
    { mediaUrl : String
    , description : String
    , currentUser : User
    , comments : List Comment
    }

-- Comments --

type alias Comment =
    { commenter : User
    , comment : String
    , created : Date
    }

-- User --

type alias User =
    { name : String }

Embracing our userbase’s love of self-expression through images, we’ve decided to try out allowing users to comment only in the form of the media on a given page. That is, for books, the commenting system remains unchanged–users comment in words. For songs, users can comment by uploading an audio file. And so on.

Perhaps sensibly, perhaps not, we decide to create a type that describes the media type on a given view, so that we don’t get confused later and accidentally allow an audio comment on a video (that would be crazy!).



type alias Model =
    { mediaUrl : String
    , description : String
    , currentUser : User
    , comments : List MediaComment
    }


-- Comment --

type alias Comment =
    { commenter : User
    , comment : String
    , created : Date
    }


type MediaComment
    = Book Comment
    | Song Comment
    | Picture Comment
    | Video Comment


-- User --

type alias User =
    { name : String }

But at this point, we still only actually support comments of type String. Is that what we want? Maybe–we can use the comment field to store an actual comment’s text content for book comments, and just a source url otherwise, but that’s pretty simplistic. What if we want our video comments to have poster images attached to them?

Really, what we want to do is extend the idea of a Comment record with a meta idea about a comment and the comment’s contents. Sounds like record extensibility, right?



type alias Model =
    { mediaUrl : String
    , description : String
    , currentUser : User
    , comments : List MediaComment
    }


-- Comment --

type alias Comment a =
    { a
    | commenter : User
    , created : Date
    }


type alias StringCommentContents =
    { comment : String }


type alias UrlCommentContents a =
    { a
    | src : String
    }


type alias TypeCommentContents =
    { type' : String }


type alias PosterCommentContents =
    { posterSrc : String }


type MediaComment
    = Book (Comment StringCommentContents)
    | Song (Comment (UrlCommentContents TypeCommentContents))
    | Picture (Comment (UrlCommentContents {}))
    | Video (Comment (UrlCommentContents PosterCommentContents))


-- User --

type alias User =
    { name : String }

Using record type extensibility may seem like a great idea, since the goal is to “extend” a type. But if your brain is remotely like mine, reading the above code was a frustrating experience. We’re not modeling a very complex system, but in the interest of keeping code DRY we’ve very quickly ended up in a brain-stack-overflow situation.

So how can we get ourselves out of this mess, and back to a friendly description of our commenting system? We “make a hole” rather than say “this thing is like this thing which is like this thing in these ways.”

Here, we maintain the idea of the “Media Comment,” which protects against accidentally using the wrong comment view for a given media type, but we use the “make a hole” strategy.



type alias Model =
    { mediaUrl : String
    , description : String
    , currentUser : User
    , comments : List MediaComment
    }


-- Comment --

type alias Comment a =
    { commenter : User
    , created : Date
    , comment : a
    }


type alias StringCommentContents =
    { comment : String }


type alias UrlCommentContents a b =
    { src : String
    , type' : a
    , posterSrc : b
    }


type MediaComment
    = Book (Comment String)
    | Song (Comment (UrlCommentContents String ()))
    | Picture (Comment (UrlCommentContents () ()))
    | Video (Comment (UrlCommentContents String String))

We can, of course, flatten this all the way back out if we want, but continue to use the “make a hole” strategy:



type alias Model =
    { mediaUrl : String
    , description : String
    , currentUser : User
    , comments : List MediaComment
    }


-- Comment --

type alias Comment a b c d =
    { commenter : User
    , created : Date
    , comment : a
    , src : b
    , type' : c
    , posterSrc : d
    }


type MediaComment
    = Book (Comment String () () ())
    | Song (Comment () String String ())
    | Picture (Comment () String () ())
    | Video (Comment () String String String)


-- User --

type alias User =
    { name : String }

This is better than the version using tons of extensibility, but there’s still too much complexity to comfortably keep track of. We can try decoupling our what-kind-of-media-idea from our what-a-comment-looks-like idea:



type alias Model =
    { mediaUrl : String
    , mediaType : Media
    , description : String
    , currentUser : User
    , comments : List Comment
    }


-- Comment --

type alias Comment =
    { commenter : User
    , created : Date
    , comment : Maybe String
    , src : Maybe String
    , type' : Maybe String
    , posterSrc : Maybe String
    }


type Media
    = Book
    | Song
    | Picture
    | Video


-- User --

type alias User =
    { name : String }

Here it becomes more obvious that we’ve been neglecting the information in our model about our actual media type, but leaving that aside, there are a couple of things to notice here. One, it’s the most succinct. Two, information about the necessary and expected shape of a given comment is lost–view code written with this code is going to be full of Maybe.map text model.comment |> Maybe.withDefault (text "oh no we're missing a comment how did this happen???")s. Three, it’s easy to understand what fields exist, but hard to know which fields are expected/mandatory/in use.
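
Point two is easiest to see in code. With every field a Maybe, even a simple view helper has to invent a fallback for fields that should always be present for its media type. A minimal sketch (viewCommentText is a hypothetical helper, not from the examples above):

```elm
viewCommentText : Comment -> Html msg
viewCommentText comment =
    -- comment.comment is a Maybe String, so we must supply a default
    -- even when this view only ever renders book comments
    comment.comment
        |> Maybe.map text
        |> Maybe.withDefault (text "oh no we're missing a comment how did this happen???")
```

Multiply that by four fields and every view in the app, and the succinct model starts to look a lot less friendly.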

A final option for organizing this code: don’t try so hard to be DRY. Have different models/views for working with different comment types, and don’t worry about having overlap in those field names when your record is describing different shapes.



type alias Model =
    { mediaUrl : String
    , description : String
    , currentUser : User
    , comments : List MediaComment
    }


-- Comment --

type alias Comment a =
    { commenter : User
    , created : Date
    , content : a
    }


type MediaComment
    = Book (Comment String)
    | Song (Comment { src : String, songType : String })
    | Picture (Comment { src : String })
    | Video (Comment { src : String, videoType : String, posterSrc : String })


-- User --

type alias User =
    { name : String }

Whatever you decide, do be aware of the outsize impact in complexity/brain-case-space when using extensible records over some other options. Extensibility is for writing flexible functions, not for modeling your types. e.g., for our final example above, we could add another type that would make writing type signatures simpler, without making it harder for us to think about our model:



type alias UrlComment a =
    { a | src : String }

Again, this is a pretty contrived example (for as simple a concept as a comment, it probably makes the most sense to just separate out comment meta data from comment contents). For a complicated web application, though, it’s not unlikely to run into very complex structures on the frontend that can’t be easily broken down into the categories of “shared-shape-meta-data” and “differing-content-data.” My hope here is just that if you find yourself in a situation where your data modeling is getting out of hand across different pages (i.e., tons of different ways of representing very similar but non-identical information), you’ll be able to simplify your models without too much confusion.

case/if..then Expressions

Case expressions are awesome. Elm pattern matching is basically insane in a good way. Using case expressions cleverly can mean cutting down on extraneous/hard-to-follow if/else branches, but using too many case expressions can become hard to deal with.

Let’s make a view with a couple of steps to it. Say we’re making a game. We’re going to have a welcome screen, a game-play screen, and a game-over screen. Refreshing the page is clear cheatery, so let’s not worry about persistence. We’re also not going to worry about game logic. All we care about are the flow steps.

This is our model:



type alias Model =
    { playerName : String
    , moves : List ( Int, Int )
    , boardSize : (Int, Int)
    , gameStep : GameStep
    }

type GameStep
    = Welcome
    | GamePlay Turn
    | GameOver


type Turn
    = Player
    | ComputerPlayer

Let’s start out with a naïve view (naïve doesn’t mean don’t do this! It just means don’t stop your work here). Skipping game logic means there’s not much use to this view, but that should help us to focus in on good patterns to follow and less-good patterns to minimize.



view : Model -> Html Msg
view model =
    div
        []
        [ viewHeader model
        , viewGameBoard model
        ]


viewHeader : Model -> Html Msg
viewHeader model =
    header [] (headerContent model)


headerContent : Model -> List (Html Msg)
headerContent {gameStep, playerName} =
    case gameStep of
        Welcome ->
            [ div [] [ text ("Welcome, " ++ playerName ++ "!") ] ]

        GamePlay turn ->
            case turn of
                Player ->
                    [ div [] [ text ("It's your turn, " ++ playerName ++ "!") ] ]

                ComputerPlayer ->
                    [ div [] [ text "It's the computer's turn. Chillll." ] ]

        GameOver ->
            [ div [] [ text "Game Over!!!" ] ]


viewGameBoard : Model -> Html Msg
viewGameBoard model =
    case model.gameStep of
        Welcome ->
            text ""

        GamePlay turn ->
            buildBoard model.boardSize turn

        GameOver ->
            div [] [ text "This game ended. We're skipping game logic so who knows who won!" ]


buildBoard : (Int, Int) -> Turn -> Html Msg
buildBoard boardSize turn =
    let
        squareStyles =
            case turn of
                Player ->
                    style [ ("border", "1px solid green") ]

                ComputerPlayer ->
                    style [ ("border", "1px solid red") ]

    in
        tbody [] (buildBoardRow boardSize squareStyles)


buildBoardRow : (Int, Int) -> Attribute Msg -> List (Html Msg)
buildBoardRow (boardWidth, boardHeight) squareStyles =
    viewBoardSquare squareStyles
        |> List.repeat boardWidth
        |> List.repeat boardHeight
        |> List.map (tr [])


viewBoardSquare : Attribute Msg -> Html Msg
viewBoardSquare squareAttribute =
    td [ squareAttribute ] [ text "[ ]" ]

Okay, cool! So that works, as long as we’re fine with having an un-updateable model with corresponding view.

But there are a couple of things that are bad. One, we’re repeating case expressions based on game step all over the place. Game state is a very top-level concern. We shouldn’t have to re-evaluate what step we’re on all over the place. Another less-than-stellar thing we’re doing is nesting case expressions. It makes the code harder to follow and, as a rule, isn’t necessary.

See if you like this view better:



view : Model -> Html Msg
view model =
    div [] (buildBody model)


buildBody : Model -> List (Html Msg)
buildBody {gameStep, playerName, boardSize} =
    case gameStep of
        Welcome ->
            [ viewHeader ("Welcome, " ++ playerName ++ "!") ]

        GamePlay Player ->
            [ viewHeader ("It's your turn, " ++ playerName ++ "!")
            , buildBoard boardSize "green"
            ]

        GamePlay ComputerPlayer ->
            [ viewHeader "It's the computer's turn. Chillll."
            , buildBoard boardSize "red"
            ]

        GameOver ->
            [ viewHeader "Game Over!!!"
            , div [] [ text "This game ended. We're skipping game logic so who knows who won!" ]
            ]


viewHeader : String -> Html Msg
viewHeader headerText =
    header [] [ text headerText ]


buildBoard : (Int, Int) -> String -> Html Msg
buildBoard boardSize boardColor =
    tbody [] (buildBoardRow boardSize boardColor)


buildBoardRow : (Int, Int) -> String -> List (Html Msg)
buildBoardRow (boardWidth, boardHeight) boardColor =
    viewBoardSquare boardColor
        |> List.repeat boardWidth
        |> List.repeat boardHeight
        |> List.map (tr [])


viewBoardSquare : String -> Html Msg
viewBoardSquare boardColor =
    td
        [ style [ ("border", "1px solid " ++ boardColor) ] ]
        [ text "[ ]" ]


It’s more succinct, but more importantly, it has wayyy less branching logic. All the logic over what to show/what not to show is at the top level, so it’s very easy to see what’s going on and what is going to be rendered to the page. This pattern allows for more easily generalizable components. To recap, the two moves we made here were to branch based on state at the top of our view and to eliminate sub-branching logic by making use of Elm’s pattern matching for tuples.

Hopefully, using this pattern will make it easier to extract pieces of your view for reuse, since they’ll end up being dependent on much less of the model than they would otherwise. Compare the board building method of our first try to our second try–the second one just wants to know how big and what color, and it’ll make you a board, while the first would like to know whose turn it is, please and thanks.

Elm Architecture as a Pattern

My final tip: don’t forget that the Elm Architecture is just a bunch of functions that sometimes get called! That is, update is just a function, and calling it recursively is totally fine if you want to.
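
As a sketch of what that can look like, using the game model from earlier (the Msg type here is hypothetical, since the examples above never defined one):

```elm
type Msg
    = StartGame
    | Restart


update : Msg -> Model -> Model
update msg model =
    case msg of
        StartGame ->
            { model | gameStep = GamePlay Player }

        Restart ->
            -- update is just a function, so we can call it recursively:
            -- clear the moves, then reuse the StartGame logic as-is
            update StartGame { model | moves = [] }
```

No special machinery required; Restart composes the existing StartGame behavior instead of duplicating it.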


Tessa Kelly
@t_kelly9
Engineer at NoRedInk

Maybe the Cookies API should not exist


One of the unique things about working at NoRedInk is that every other week we have “Elm Friday” where I pair with an engineer on something. Tessa and I worked on an alternate Array implementation. Noah and I made the navigation package for single-page apps. Scott helped create the react-elm-components package for embedding Elm in React apps. I think “Elm Friday” is wildly productive because each person brings unique perspective, interests, skills, knowledge, etc. and we augment each other. All of these are projects that are (1) great for the overall ecosystem and (2) projects that we would probably not have worked on individually.

Point is, Richard and I recently found ourselves working on elm-lang/cookie and came to a surprising conclusion: maybe it should not exist!

What are Cookies?

A cookie is a little piece of information associated with your domain. The important detail is that all that info gets collected and put into the Cookie header of every request you make. So if someone sets theme and user cookies, you would have a header like this on every HTTP request:

GET /spec.html HTTP/1.1
Host: www.example.org
Cookie: theme=light; user=abc123
…

How do you create cookies though? One way is to use the Set-Cookie header when your server sends the page in the first place. For example, the server could set the theme and user cookies by providing a response like this one:

HTTP/1.0 200 OK
Content-type: text/html
Set-Cookie: theme=light
Set-Cookie: user=abc123; Expires=Wed, 09 Jun 2021 10:18:14 GMT
…

Okay, so that is the basic functionality. On top of that, JavaScript provides a truly awful cookie API that can be understood as exposing the Set-Cookie header functionality at runtime.
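
For a flavor of why it is awful: document.cookie hands you every cookie as one "k=v; k2=v2" string, so even reading a single value means parsing it yourself. A minimal sketch (readCookie is our hypothetical helper, taking the cookie string as an argument so it is not tied to the browser):

```javascript
// Read one named cookie out of a raw Cookie-style string like
// "theme=light; user=abc123". Returns null if the name is absent.
function readCookie(cookieString, name) {
  for (const pair of cookieString.split("; ")) {
    const eq = pair.indexOf("=");
    if (pair.slice(0, eq) === name) {
      return decodeURIComponent(pair.slice(eq + 1));
    }
  }
  return null;
}
```

Writing is no better: assigning to document.cookie sets one cookie at a time using the same stringly-typed Set-Cookie attribute syntax.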

We are in the process of exposing all these Web Platform APIs so you can use them from Elm directly. Just doing a thin wrapper around JS APIs is easy and quick, but the whole point of this language is trying to do better. So I want to provide the functionality in a way that is delightful and works well with Elm. I think the websocket library is a great example of this! So in approaching cookies, we asked ourselves:

  • What exactly are the capabilities provided by the browser?
  • How do we expect people to use these capabilities?
  • Are there traps that people are likely to fall into?
  • How can we design our API such that “doing the right thing” is the default?

In the end, a great platform API fits into The Elm Architecture and the overall ecosystem in just the right way. Ideally users do not even realize that there were other options. They are just writing oddly solid apps and feeling pretty good about it.

As we explored these questions, it revealed that almost everything about cookies is a trap.

Cookies have Problems

Security

You may have heard of “tracking cookies” that let companies see which pages you have visited. Here is how that works:

  1. A company convinces as many websites as possible to embed a “Like button” by adding a simple <script> tag to their page. Share this recipe on Facebook, Twitter, and Pinterest!

  2. The <script> can load arbitrary code that runs with full permissions. That code has access to everything from the document to whatever parts of your JS code are globally accessible.

The ultimate goal of this setup is to uniquely identify a user, which is where cookies come in.

Say you visit hooli.com. They set a cookie that uniquely identifies you. Later you go to a blog with a “Hooli Share” button which is embedded as a <script>. They can run some code that figures out what URL you are visiting, how long you are there, etc. When they have all the info they want, they send an HTTP request to hooli.com which automatically gets any hooli.com cookies. That way they get the data tagged with your unique identifier. That means they know exactly who you are as well as what sites you are visiting. Pretty neat trick!

Now, I had a vague notion that people track me online, but before looking into cookies, I had no idea it was so easy to get so much information. So this seems like a pretty bad problem to me, but I suspect enough money is made off this that it is likely to continue to exist.

Memory Caps and Encoding

When I first described cookies up above, I used a theme cookie. The idea there was that we have some app with a light and dark theme. Maybe people want a dark theme at night or something! But does it make sense to store that information in a cookie? Probably not.

First problem, browsers cap the memory available for cookies. Some person on the internet suggests that the limit is 4093 bytes for your whole domain. Pretty tiny! It sounds like in the olden times, when you got to the max size, you just had to wait until some cookies expired. Now it sounds like it will silently evict old cookies. Either way, pretty bad.

Second problem, cookies can contain a very limited set of characters. For example, Safari permits ASCII characters only. In the version we tested, it just ignores your attempt to set cookies if it sees any characters it does not like. So if you have a string with Unicode characters outside that range, it will break in various ways depending on the browser.
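
A common workaround (our sketch, not an official recommendation) is to percent-encode values so only ASCII ever reaches the cookie jar, and decode on the way out:

```javascript
// encodeURIComponent maps arbitrary Unicode text to a pure-ASCII string,
// sidestepping browsers that silently reject non-ASCII cookie values.
function encodeCookieValue(value) {
  return encodeURIComponent(value);
}

// Invert the encoding when reading the cookie back.
function decodeCookieValue(encoded) {
  return decodeURIComponent(encoded);
}
```

Of course, the encoded form is longer than the original, which eats into the already tiny memory cap even faster.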

So for a vast majority of “storage” scenarios, this API is significantly weaker than localStorage and sessionStorage.

Note: Using a cookie would also mean that the data is attached to every single HTTP request you make. As we learned, that is not too much data, but why bother sending it if the whole point of browser storage is that you do not handle it on your server?

A Better Way?

As far as I can tell, the only usage that really makes sense is the following scenario:

A user comes to your website http://mail.example.com and you want to serve them their inbox immediately, no sign in necessary.

In that case, you do want to have some additional information added by the browser that can help reliably identify and authenticate the user. You want to know that and nothing else.

Based on a unique identifier, you can look up things like theme from your database and send back the appropriately styled HTML. Basically, any other information can be derived from a unique ID. In this world, people would have a consistent experience no matter what device they log in from. You also own the customization data, so if you do an upgrade such that theme works differently now, you can change it all at once on your servers. You do not have to wait for the user to sign back in or keep crazy fallback code around forever.

So it sounds like the following constraints could help:

  • A domain can only store a unique identifier.
  • The only way to set this unique identifier is with an HTTP response header, like a restricted Set-Cookie.
  • That unique identifier is only added to HTTP requests headers if the page was served from the same domain the request is going to.

That means I can log in on hooli.com without Hooli permanently knowing who I am when I check out a French Toast recipe.

There are problems though. I use Google Analytics to learn how many people are checking out Elm, and it is actually pretty handy that they can distinguish between visits and unique visitors. One person visiting 10 times is very different than 10 people visiting one time each! I think I could still get that information, but my servers would have to have some extra logic enabled to assign unique IDs. So it would be a bit harder to set up, but for innocent questions like, “how can the Elm website be more helpful?” it seems like this scheme would still work.

It seems like there is a lot of money in keeping the current design, so who knows if something like this will ever happen!

What to do in Elm?

It was getting pretty late on Friday by the time Richard and I really loaded all this information into our brains. We had drafted a pretty nice API and were kind of tired based on how insane everything was.

As we were finalizing the API and writing documentation, I asked Richard: if you only want a unique identifier, and you only want to set it with a header like Set-Cookie, why are we even doing this? The one valid scenario did not require this API! Neither of us could think of a compelling reason to set cookies any other way. Especially considering that the browser’s localStorage API covers the only other plausible use with a higher data cap and proper unicode support.

Elm is very serious about having the right defaults. People should just do the easy and obvious thing, and somehow, as if by accident, end up with a codebase that is quite nice. So libraries need to be pleasant to use, of course, but they also need to rule out bad outcomes entirely. And as far as we can tell, this means the cookie library should not exist!

Note: Richard and I could not think of legitimate uses for this API, but that may be a lack of creativity or experience. Open a friendly issue here if you think you have a scenario that cannot be covered by the Set-Cookie header.


Evan Czaplicki
@czaplic
Engineer at NoRedInk

Functional Randomization

When I first started playing with Elm, the discoveries went something like this:

  1. Wow, this is some powerful functional mojo.
  2. I bet this would be a blast to make games with!
  3. I just need a random number…
  4. I better go read half of the Internet…
  5. Have I made bad life choices???

Laugh all you want, but back then Elm’s documentation of the Random module opened with a twenty-line example. I was using Ruby every day where I had rand()!

Why are functional programming and random number generation mortal enemies? The answer is kind of interesting…

Seeds

To make sense of why this topic is complicated, let’s begin with some exploration in a seemingly less strict language. Here’s the easiest way to start pulling random numbers out of Elixir (or Erlang):

$ iex
Erlang/OTP 18 [erts-7.3] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]

Interactive Elixir (1.2.5) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> :rand.uniform
0.006229498385012341
iex(2)> :rand.uniform
0.8873819908035944
iex(3)> :rand.uniform
0.23952446122301357

To prove those are really random, I’ll repeat the process and hope we see new numbers:

$ iex
Erlang/OTP 18 [erts-7.3] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]

Interactive Elixir (1.2.5) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> :rand.uniform
0.9471495706406268
iex(2)> :rand.uniform
0.9593850716627006
iex(3)> :rand.uniform
0.19095631066267954

OK, they look different. I better pull a rabbit out of my hat quick, because this is shaping up to be one lame blog post.

Let’s ask for a random number one more time, but let’s keep an eye on the process dictionary as we do it this time. What’s a process dictionary you ask? It’s Elixir’s dirty little secret. Inside every process, there’s some mutable data storage. Oh yeah, believe it:

$ iex
Erlang/OTP 18 [erts-7.3] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]

Interactive Elixir (1.2.5) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> Process.get_keys
[:iex_history]
iex(2)> :rand.uniform
0.6585831964907924
iex(3)> Process.get_keys
[:iex_history, :rand_seed]

Look a rabbit, err, a :rand_seed!

Before we asked for a random number, the process dictionary only contained some data from my REPL session. But after the call, there was random number related data in there.

Let’s dig a little deeper by watching the seed as we pull another number:

iex(4)> seed = Process.get(:rand_seed)
{%{max: 288230376151711743, next: #Function,
   type: :exsplus, uniform: #Function,
   uniform_n: #Function},
 [41999190440545321 | 147824492011192469]}
iex(5)> :rand.uniform
0.5316998602996328
iex(6)> Process.get(:rand_seed)
{%{max: 288230376151711743, next: #Function,
   type: :exsplus, uniform: #Function,
   uniform_n: #Function},
 [147824492011192469 | 5427558722783275]}

What did we see there? First, seeds are gross looking things. More importantly though, it’s changing as we pull numbers. You can tell by comparing the numbers in the improper list at the end of the seed.

What are these changing seeds?

You can kind of think of random number generation as a giant list of good numbers available for picking. (Yes, I’m simplifying here, but stick with me for a moment…) It’s like someone sat down at the dawn of computing, rolled a die way too many times, and carved all the numbers into stone tablets for us to use later on. The problem is, if we always started at the beginning of the list, the sequence would always be the same and video games would be hella boring!

How do we make use of this master list? We seed it! You can think of that as your computer closing its eyes, pointing its finger at a random spot in the list, and saying, “I’ll pick this number next.” Except that your computer probably doesn’t have eyes or a finger. Anyway, that’s how you get random numbers out of a random list… randomly. (At least it’s close enough for our purposes. I know I lied some. I’m sorry. It’s for a good cause.)
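To make that finger-pointing picture concrete, here is a toy pure generator in JavaScript. It is a linear congruential generator, far simpler than the exsplus algorithm :rand uses, but it has the same shape: seed in, value plus next seed out.

```javascript
// A toy pure PRNG: given a seed, return a number in [0, 1) together
// with the seed to use for the next call. The constants are from the
// classic "minstd" LCG; this is a sketch, not what Erlang really does.
function step(seed) {
  const nextSeed = (seed * 48271) % 2147483647;
  return [nextSeed / 2147483647, nextSeed];
}

let [n1, s1] = step(42);
let [n2, s2] = step(s1); // thread the new seed forward
let [again] = step(42);  // same seed in, same number out

console.log(n1 === again); // true: the function is pure
console.log(n1 !== n2);    // true: new seeds give new numbers
```

The “new finger” is just the second element of the returned pair: hang on to it, and the next call picks up where this one left off.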

Getting back to our example, it turns out there’s another function that we could call to get random numbers. We were using :rand.uniform/0, but I think it’s time to try :rand.uniform_s/1. What does the _s stand for? I have no idea. Let’s pretend it’s “seeded” solely because it helps my story here and you want me to succeed. :rand.uniform_s/1 expects to be passed a seed and I stuck the first one we examined in a seed variable. Right after we saw that seed last time, we got the number 0.5316998602996328 out of the generator. You can scroll up if you think I would make something like that up. Or you could scroll down:

iex(7)> :rand.uniform_s(seed)
{0.5316998602996328,
 {%{max: 288230376151711743, next: #Function,
    type: :exsplus, uniform: #Function,
    uniform_n: #Function},
  [147824492011192469 | 5427558722783275]}}

The return value has changed from what :rand.uniform/0 gives us, but the first item in the tuple is the random number that we saw before. The second item in the tuple is a new seed. You’ve seen it before too. It was in our process dictionary after we generated 0.5316998602996328 last time. I can’t believe you didn’t recognize it!

When using the _s functions, Elixir leaves it to us to track the seeds. Anytime we fetch a number, it gives us that and a new seed for the next time we need one. It’s kind of like generating two numbers each time, one to use and one to be the new finger. Well, you know what I mean!

In Elixir, asking for a seed is an impure function that returns a different answer with each call. (Oops, I lied again! More on that later…) Again, compare the numbers at the end of each seed to see the differences:

$ iex
Erlang/OTP 18 [erts-7.3] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]

Interactive Elixir (1.2.5) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> Process.get_keys
[:iex_history]
iex(2)> :rand.seed_s(:exsplus)
{%{max: 288230376151711743, next: #Function,
   type: :exsplus, uniform: #Function,
   uniform_n: #Function},
 [88873856995298220 | 199352771829937909]}
iex(3)> :rand.seed_s(:exsplus)
{%{max: 288230376151711743, next: #Function,
   type: :exsplus, uniform: #Function,
   uniform_n: #Function},
 [12382884451496331 | 275984276003718626]}
iex(4)> :rand.seed_s(:exsplus)
{%{max: 288230376151711743, next: #Function,
   type: :exsplus, uniform: #Function,
   uniform_n: #Function},
 [252423044625965483 | 35827924244841321]}

The :exsplus argument specifies the algorithm to use, which is probably the source of those anonymous functions we keep seeing in the seeds. I’ve used the default that Elixir would pick itself, if we let it choose the seed.

Now, using seeds is a pure function call that always yields the same results:

iex(5)> seed = :rand.seed_s(:exsplus)
{%{max: 288230376151711743, next: #Function,
   type: :exsplus, uniform: #Function,
   uniform_n: #Function},
 [256126833480978113 | 32213264852888463]}
iex(6)> :rand.uniform_s(seed)
{0.2372892113261949,
 {%{max: 288230376151711743, next: #Function,
    type: :exsplus, uniform: #Function,
    uniform_n: #Function},
  [32213264852888463 | 36180693784403711]}}
iex(7)> :rand.uniform_s(seed)
{0.2372892113261949,
 {%{max: 288230376151711743, next: #Function,
    type: :exsplus, uniform: #Function,
    uniform_n: #Function},
  [32213264852888463 | 36180693784403711]}}
iex(8)> Process.get_keys
[:iex_history]

Also, notice that we’re no longer using secret mutable storage to generate numbers this way.

How do we use all of this in actual work? We don’t. We typically just cheat and call :rand.uniform/0. But now you know how the cheating works!

For the sake of example, and blog post line count, here’s a simple script doing things The Right Way™. It streams random numbers, a new one each second:

defmodule ARandomNumberASecond do
  defstruct [:callback, :rand_seed]

  def generate(callback) when is_function(callback) do
    state = %__MODULE__{callback: callback, rand_seed: :rand.seed_s(:exsplus)}
    do_generate(state)
  end

  defp do_generate(
    state = %__MODULE__{callback: callback, rand_seed: rand_seed}
  ) do
    {number, new_rand_seed} = :rand.uniform_s(rand_seed)
    callback.(number)

    :timer.sleep(1_000)

    new_state = %__MODULE__{state | rand_seed: new_rand_seed}
    do_generate(new_state)
  end
end

ARandomNumberASecond.generate(fn n -> IO.puts n end)

The main trick here is that we keep passing forward a fresh seed. The first one is created in ARandomNumberASecond.generate/1. After that, we keep track of each seed returned from :rand.uniform_s/1 and use that in the next call. Look Ma, no mutable state!

Calling this script will give you a list that looks something like the following. If your output is exactly the same, you should have bought a lottery ticket this week:

$ elixir a_random_number_a_second.exs
0.8858439425003749
0.8401632592437187
0.22632370083022735
0.8504749565721951
0.22907809547742136
0.302700583368534
0.756432821571183
0.8986912440205949
0.10409795947101291
0.04221231517545583
…
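The same seed-threading discipline can be sketched in JavaScript, minus the one-second timer, using a toy LCG as a stand-in for any pure seed-to-[value, nextSeed] function:

```javascript
// Sketch of the seed-threading pattern from the Elixir script above.
// step is a stand-in for any pure PRNG: seed -> [value, nextSeed].
function step(seed) {
  const nextSeed = (seed * 48271) % 2147483647; // toy "minstd" LCG
  return [nextSeed / 2147483647, nextSeed];
}

// Generate n numbers by passing each returned seed into the next call.
function take(n, seed) {
  const numbers = [];
  for (let i = 0; i < n; i++) {
    const [number, nextSeed] = step(seed);
    numbers.push(number);
    seed = nextSeed; // the "new finger" for the next iteration
  }
  return numbers;
}

console.log(take(3, 1)); // same three numbers every run for seed 1
```

No mutable state outside the loop variable, and the whole run is reproducible from the one starting seed.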

Side Effects

Meanwhile, back in Elm, the evil wizard Random has driven all side effects from the land…

A major goal of functional programming is to limit side effects. We want to be working with as many side effect free, pure functions as possible. Side effects are much harder to reason about and control, so they end up being a major source of bugs in our code.

Elm, in particular, takes this call to arms very seriously. The entire process of seeding and generating random numbers is side effect free. Let’s get into how that works:

$ elm repl
---- elm repl 0.17.0 -----------------------------------------------------------
 :help for help, :exit to exit, more at <https:>
--------------------------------------------------------------------------------
> import Random
> Random.initialSeed 1
Seed { state = State 2 1, next = <function>, split = <function>, range =     <function> }
    : Random.Seed
> Random.initialSeed 1
Seed { state = State 2 1, next = <function>, split = <function>, range =     <function> }
    : Random.Seed
> Random.initialSeed 2
Seed { state = State 3 1, next = <function>, split = <function>, range =     <function> }
    : Random.Seed

You can see here that you need to provide some number when requesting a seed in Elm. If you use the same number, you get the same seed (compare the State numbers). Different numbers yield different seeds.

This makes testing code that makes use of random numbers quite easy. You can create the seed with a known value in tests, pass that into your code, and magically predict the balls as they fall out of the tumbler. That’s a powerful benefit.

A quick aside in defense of Elixir’s honor: it also supports seeding from known values. I just didn’t show it. Throwing Elixir under the bus: you seed it with a tuple of three integers because that’s maximally weird.

Back to Elm. Again.

Elm has another abstraction at play: generators. A generator knows how to build whatever you are after whether that’s integers in some range, booleans, or whatever. They can also be combined, so Random.list could fill a list of a desired size with integers or booleans. Here’s how you build them:

> Random.int -1 1
Generator <function> : Random.Generator Int
> Random.list 10 (Random.int -1 1)
Generator <function> : Random.Generator (List Int)

OK, that’s pretty dull, but we can combine generators and seeds to produce numbers:

> Random.step (Random.int -1 1) (Random.initialSeed 1)
(1,Seed { state = State 80028 40692, next = <function>, split = <function>,     range = <function> })
    : ( Int, Random.Seed )
> Random.step (Random.int -1 1) (Random.initialSeed 1)
(1,Seed { state = State 80028 40692, next = <function>, split = <function>,     range = <function> })
    : ( Int, Random.Seed )
> Random.step (Random.int -1 1) (Random.initialSeed 1)
(1,Seed { state = State 80028 40692, next = <function>, split = <function>,     range = <function> })
    : ( Int, Random.Seed )
> Random.step (Random.int -1 1) (Random.initialSeed 61676)
(0,Seed { state = State 320459915 40692, next = <function>, split = <function>, range = <function> })
    : ( Int, Random.Seed )
> Random.step (Random.list 10 (Random.int -1 1)) (Random.initialSeed 100)
([1,1,0,1,1,-1,-1,1,1,0],Seed { state = State 1625107866 1858572493, next =     <function>, split = <function>, range = <function> })
    : ( List Int, Random.Seed )
> Random.step (Random.list 10 (Random.int -1 1)) (Random.initialSeed 42)
([1,-1,-1,-1,0,-1,1,-1,-1,-1],Seed { state = State 2052659270 1858572493, next = <function>, split = <function>, range = <function> })
    : ( List Int, Random.Seed )

Note that we get generated item and new seed pairs, just like we saw in Elixir. You can also see that the same seed always produces the same value.
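Elm’s generator abstraction is easy to mimic. In this sketch a “generator” is just a function from seed to [value, nextSeed], and combinators build bigger generators out of smaller ones; the names are borrowed from Elm’s API, but the LCG inside is a toy, not Elm’s actual algorithm:

```javascript
// A toy LCG to advance the seed; not Elm's real algorithm.
function lcg(seed) {
  return (seed * 48271) % 2147483647;
}

// int(lo, hi): a generator of integers in [lo, hi].
function int(lo, hi) {
  return function (seed) {
    const nextSeed = lcg(seed);
    const value = lo + (nextSeed % (hi - lo + 1));
    return [value, nextSeed];
  };
}

// list(n, gen): a generator of n-element lists, threading the seed.
function list(n, gen) {
  return function (seed) {
    const values = [];
    for (let i = 0; i < n; i++) {
      const [value, nextSeed] = gen(seed);
      values.push(value);
      seed = nextSeed;
    }
    return [values, seed];
  };
}

// Like Elm's Random.step: run a generator with a seed.
function stepGen(gen, seed) {
  return gen(seed);
}

const [xs] = stepGen(list(10, int(-1, 1)), 42);
console.log(xs); // always the same ten values for seed 42
```

Because a generator is a plain function, combining them is just function composition, which is exactly what makes Elm’s Random.list and friends possible.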

That leaves us with just one question, but it’s a big one: where do we get the number needed to create the initial seed? Hardcoding some value works for your tests and this blog post, but doing it for your application proper kind of defeats the point of randomization, am I right?

In my next blog post… OK, OK, we’ll discuss it now. Calm down.

To get an initial seed we’re obviously going to need to do something in a place more tolerant of side effects. It just so happens that Elm code is bootstrapped in JavaScript, where side effects are perfectly legal. We can use JavaScript’s random number generator to kick start random number generation inside of Elm’s side effect free walls.

To show what that process might look like, let’s recreate our number a second streamer in Elm. We begin with this HTML:

<meta charset="utf-8">
<title>A Random Number A Second</title>
<script src="ARandomNumberASecond.js" type="text/javascript" charset="utf-8"></script>
<script type="text/javascript" charset="utf-8">
  Elm.ARandomNumberASecond.fullscreen({
    randSeed: Math.floor(Math.random() * 0xFFFFFFFF)
  });
</script>

The key bit here is that we pass a program flag into Elm’s invocation. This is a randSeed pulled from JavaScript’s random number generator.

Now we need to dig into the Elm code to see how to make use of that flag. We’ll start with general program concerns:

module ARandomNumberASecond exposing (main)

import Html exposing (..)
import Html.App
import Time exposing (Time, second)
import Random


type alias Flags = {randSeed : Int}


main : Program Flags
main =
  Html.App.programWithFlags
    { init = init
    , update = update
    , view = view
    , subscriptions = subscriptions
    }

This code just imports what we need and sets up execution in version 0.17 style. Note that we do tell Elm that this is a programWithFlags. The model is where we can finally make use of that value:

-- MODEL


type alias Model =
  { currentSeed : Random.Seed
  , numbers : List Int
  }


init : Flags -> (Model, Cmd Msg)
init {randSeed} =
  ( { currentSeed = Random.initialSeed randSeed
    , numbers = [ ]
    }
  , Cmd.none
  )

Elm will pass all flags to your init function. Here that allows us to construct the initial seed. We bootstrap the Model to keep track of two bits of state: the current Random.Seed and our List of generated numbers for display.

Here’s the event handling code:

-- UPDATE


type Msg = Tick Time


update : Msg -> Model -> (Model, Cmd Msg)
update _ {currentSeed, numbers} =
  let
    (number, nextSeed) = Random.step (Random.int 1 100) currentSeed
  in
    ( { currentSeed = nextSeed
      , numbers = numbers ++ [number]
      }
    , Cmd.none
    )

We expect to receive Msgs that have the current time inside of them, one each second. We can actually ignore the time, but use its arrival as our cue to append a new Random number onto our list. This code pushes nextSeed forward, just as we did in Elixir.

Next, we need to sign up to receive those Tick Msgs:

-- SUBSCRIPTIONS


subscriptions : Model -> Sub Msg
subscriptions _ =
  Time.every second Tick

This code subscribes to receive the Time from the Elm runtime each second, wrapped in a Tick. This passes impure responsibilities on to Elm itself and leaves our code pure.

The final bit is just to render the current list of numbers:

-- VIEW


view : Model -> Html Msg
view {currentSeed, numbers} =
  case numbers of
    [ ] ->
      viewLoading
    _ ->
      viewNumbers numbers


viewLoading : Html Msg
viewLoading =
  text "Loading..."


viewNumbers : List Int -> Html Msg
viewNumbers numbers =
  div [ ] (List.map viewNumber numbers)


viewNumber : Int -> Html Msg
viewNumber n =
  p [ ] [text (toString n)]

That is some very basic HTML generation because this is a programming blog post. Had this been a design blog post, it would have looked gorgeous. Also, it wouldn’t have been written by me.

The only trick in the code above is that we render a "Loading..." message until the first time event shows up about one second into our run. We detect that case by pattern matching on our still empty list of numbers.

You can find the full code for this example in this Gist, if you want to play with it yourself.

When I run the code, my browser begins filling up with numbers:

96

100

17

86

32

60

98

59

97

80

…

It’s worth noting that this isn’t the only way to generate random numbers in Elm 0.17. You can instead generate Cmds that the Elm runtime transforms into events your code receives. This pushes seeding concerns out of your code while still keeping it pure. The documentation has an example of such usage. I’ve used the “manual” approach in this code because it’s harder and I wanted to impress you.

That’s the show, folks. Hopefully you’ve seen how random number generation works in both Elixir and Elm. Maybe you even gained some insight into why this process works the way it does. If you have, please write to me and explain it. Thanks!

James Edward Gray II
@JEG2
Engineer at NoRedInk

Running our First Design Sprint

This is a cameo post from our amazing Head of Product, Jocey Karlan!

Make Learning Fun

Learning should be fun. Amidst the many debates raging in the education space, this concept is rarely contested. But while we may agree on the mission of creating inspiring, supportive, and fun learning environments, the method is harder to pin down. Certainly, learning can be fun, but it can also be arduous, tiring, and, at times, frustrating.

At NoRedInk, we believe in mastery-based learning, a paradigm that requires students to prove their understanding of a concept before progressing to the next one. This is drastically different from many traditional models where students move from unit to unit or grade to grade based on a strict timeline. In the world of mastery-based learning, students can’t simply move on after 20 minutes or 20 questions. Rather, they progress at their own pace as they learn.

When we take away easy outs and guaranteed advancement, preserving the fun of learning presents a greater challenge. To make learning fun, we must…

  • Foster a growth mindset and motivate students who are really struggling
  • Develop resources to help students feel supported as they work
  • Celebrate progress and not just completion
  • Provide a delightful visual environment that fosters joy as well as learning

Sprint

Last winter, an advance copy of Sprint arrived from our investors at Google Ventures. Our Product team was drawn to the book’s core premise: carve out 5 days to work on nothing but a single tough problem, condensing a full design-develop-test cycle into one week.

We had a tough problem at hand. For months, teachers and students had voiced feedback about our mastery-based practice engine. We heard regularly from students who felt discouraged or frustrated by progress bars that filled for correct answers and then emptied for mistakes. Instead of fun, for a subset of students, our engine was creating stress. Though we had tried to chip away at developing a solution, we had made little progress.

Thus, Sprint proposed an appealing course of action, and we set out to answer one core question: How might we make students feel like they’re continually making positive progress toward mastery? We called this question “the mastery problem.”

The Team

We put together a cross-departmental team composed of one PM (myself), one developer, and two UX designers.

The Plan

Sprint suggests setting aside 5 neatly slated days. The schedule progresses from mapping out the problem, to sketching solutions, to deciding on a best option, to building a prototype, to finally testing with real users.

The Plan

We used this calendar as inspiration and built a schedule that allowed more time for prototyping and less for decision-making. This was made possible by assigning pre-work to each member of the Sprint team, which included preparing competitor analyses and reading messages from teachers and students.

The Schedule

Day 1: Map and Sketch

  • Lightning talks: Members of our Support, School Partnerships, Product, and Engineering teams gave 15-minute presentations, sharing their context on the mastery problem.
  • Competitor analysis: Each member of the Sprint team presented on 2-3 mastery-based learning platforms.
  • Problem mapping: The Sprint team organized our notes and ideas into various themes (see image below).
  • Sketching: We tried a few sketching activities suggested in Sprint and also allowed for quiet, contained sketch time.

Whiteboard

Day 2: Decide

  • Gallery walk: Each member of the Sprint team posted 1-2 solution sketches around the room. With stickers in hand, we marked the elements of each design that stood out to us.
  • Core themes: We looked for trends in the solution sketches and identified the features that we wanted to implement in our prototype. We trimmed down the list to 6 core themes.
  • Ongoing sketching: We spent another hour sketching and discussing ideas as a group before electing one of our designers to mock up our favorite designs.

Days 3-5: Prototype

Here’s where we really “broke process”: Sprint recommends building an extremely low-fidelity prototype that leverages Keynote or prototyping software like InVision. In our case, however, these options wouldn’t cut it. InVision and Keynote are fantastic for testing in controlled environments with a limited number of possible user behaviors. We, in contrast, needed to give kids of widely varying abilities enough freedom to succeed or struggle; only then could we test authentic emotional reactions to our interface.

With exponentially many possible paths to completing our activity, we opted to engineer a solution (see GIF below). While this solution took a few extra days, we were able to move to largely asynchronous communication to free up team members not immediately involved.

Screenshot

Day 6: Test

On the last day of our Sprint, we visited a San Francisco high school and worked 1-on-1 with 8 freshman students. In case you’ve never done user testing with 14-year-olds, it’s worth noting that their brutal honesty provides data of unmatched quality. These students’ insights and reactions informed many of the changes we implemented after our Sprint.

In summary, here’s how our schedule turned out:

Schedule

What’s Next

In the weeks that followed, we visited one other school to work with students of a different age group and demographics. One of our designers continued to build out various interactions, pairing closely with two developers. Our Product team started to flesh out a spec and brought in a QA analyst to evaluate those initial guidelines.

Conducting our first design Sprint allowed our team to take a long-standing, messy, and emotional problem and come to a solution. Today, we’ve committed to addressing the “mastery problem” in the first quarter of 2017, an undertaking that requires redesigning our entire quiz engine, overhauling old code, and moving student answer data to a new part of our database. While much work remains, we’re confident in developing a solution informed by and built for our users. We can’t wait to release to production.

Like learning itself, creating fun learning experiences for students isn’t easy. But tough problems, a great team, and an inspiring mission are what make being a PM at NoRedInk a delight. Looking for your next role? We’re hiring.

Stay tuned for a follow-up post on how our mastery problem became our mastery solution.


Jocey Karlan
Head of Product at NoRedInk

Picking Dates with Elm

Introduction

A frontend developer sometimes just wants to drop a JavaScript widget somebody else made into their application and move on. Maybe that widget is a slider, or a menu, or a progress bar, or a spinner, or a tab, or a tooltip that points in a cool way. And sometimes that same frontend developer would like to write their application in Elm. Should this developer wait to use Elm until the widget they want is rewritten in Elm? Should they rewrite everything that they need?

NoRedInk ran into this problem a few years ago with datepickers. Now, there are some Elm datetimepicker options, but at the time we needed to prioritize building a datepicker from scratch in Elm against using the JS datepicker library we had been using before. We put building an Elm datepicker on the hackday idea pile and went with the JS datepicker library. Even with the complications of dates and time, using a JS datepicker in an Elm application ended up being a fine experience.

So our frontend developer who wants a JS widget? They can use it.

Readers of this post should have some familiarity with the Elm Architecture and with Elm syntax, but do not need to have made complex apps. This post is a re-exploration of concepts presented in the Elm guide (Introduction to Elm JavaScript interop section) with a more ~timely~ example (that is, we’re going to explore dates, datepickers, and Elm ports).

On Dates and Time

The local date isn’t just a question of which sliver of the globe one is located on: time is a matter of perception, measurability, science, and politics.

As individuals, we prefer to leave datetime calculations to our calendars, to our devices, and to whatever tells our devices when exactly they are. As developers, we place our faith in the browser implementations of the functions and methods describing dates and times.

To calculate current time, the browser needs to know where the user is. The user’s location can then be used to look up the timezone and any additional time-weirdnesses imposed by the government (please read this as side-eyes at daylight saving–I stand with Arizona). When you run new Date() in your browser’s JS console, apart from constructing the Date you asked for, you’re actually asking for your time as a function of location.

Supposing we now have a Date object that correctly describes the current time, we have the follow-up problem of formatting dates for different users. Our users might have differing expectations for short-hand formats and will have differing expectations for long-hand forms in their language. There’s plenty of room to make mistakes; outside of programming, I have definitely gotten confused over 12-hour versus 24-hour time and mm/dd/yyyy versus dd/mm/yyyy.

Okay, so computers need a way to represent time, timezones, and daylight saving time. We use the distance in seconds from the epoch to keep track of time. (If you read about the history of the Unix epoch, that’s not as simple as one might hope or expect either!) Then we need a language for communicating how to format this information for different locales and languages.
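In JavaScript, that bookkeeping is done in milliseconds rather than seconds since the epoch, which is easy to check in a console:

```javascript
// JavaScript Dates are milliseconds since the Unix epoch
// (1970-01-01T00:00:00Z), so the epoch itself is time zero.
const epoch = new Date(0);
console.log(epoch.toISOString()); // 1970-01-01T00:00:00.000Z

// Round-tripping: a Date built from a millisecond count gives it back.
const later = new Date(1000 * 60 * 60 * 24); // one day in
console.log(later.getTime());     // 86400000
console.log(later.toISOString()); // 1970-01-02T00:00:00.000Z
```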

We can represent dates in simple and universal formats. We can use semantic and consistent (or close-to semantic and close-to consistent) formatting strings. We can be careful as we parse user input so that we don’t mix up month 2 and day 2. But it’s still really easy to make mistakes. It’s hard to reason about what is going, did go, or will go wrong; sometimes, when deep in investigating a timezone bug, it’s hard to tell what’s going right!

So suppose we’ve got ourselves a great spec that involves adding a date input to a pre-existing Elm app. Where do we start? What should we know?

It’s worth being aware that the complexity of date/time considerations of the human world hasn’t been abstracted away in the programming world, and there are at times some additional complications. For example, the JavaScript Date API counts months from zero and days from one. Also worth noting: Dates in Elm actually are JavaScript Date objects, and Date objects in JavaScript rely on the underlying JavaScript implementation (probably C++).
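The zero-indexed months are easy to verify:

```javascript
// The Date constructor counts months from zero but days from one:
// month 0 is January, so this is January 15th, not February.
const d = new Date(2016, 0, 15);
console.log(d.getMonth());    // 0  (January)
console.log(d.getDate());     // 15
console.log(d.getFullYear()); // 2016
```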

On Interop

The way that Elm handles interop with JavaScript keeps the world of Elm and the world of JavaScript distinct. All the values from Elm to JS flow through one place, and all the values from JS to Elm flow through one place.

Tradeoffs:

  1. It’s possible to break your app

    Suppose we have an Elm app that is expecting a user-named item to be passed through a port. Our port is expecting a string, but oops! Due to some unanticipated type coercion, we pass 2015 through the port rather than "2015". Now our app is unhappy–we have a runtime error:

    Trying to send an unexpected type of value through port userNamedItem: Expecting a String but instead got: 2015

  2. Your Elm apps have JS code upon which they are reliant

    Often, this isn’t a big deal. We used to interop with JavaScript in order to focus our cursor on a given text input dynamically (Now, we use Dom.focus). It’s a nice UX touch, but our site still works without this behavior. That is, if we decide to load our component on a different page, but fail to bring our jQuery code to the relevant JS files for that page, the user experience degrades, but the basic functionality still works.

Benefits:

  1. We can use JavaScript whenever we want to

If you’ve got an old JS modal and you’re not ready to rewrite that modal in Elm, you’re free to keep using it. Just send over whatever info the modal needs, and then let your Elm app know when the modal closes.

  2. The single most brittle place in your code is easy to find

Elm is safe, JavaScript is not, and translating from one to the other may not work. Even without helpful error messages, it’s relatively easy to find the problem: if the app compiles but fails on page load, it’s probably being handed the wrong information at the border.

  3. We keep Elm’s guarantees.

    We won’t have to worry about runtime exceptions within the bulk of our application. We won’t have to worry about types being inconsistent anywhere except at the border of our app. We get to feel confident about most of our code.

So.. how do we put a jQuery datepicker in our Elm application?

For this post, we’ll be using the jQuery UI datepicker, but the concepts should be the same no matter what datepicker you use. Once the jQuery and jQuery UI libraries are loaded on the page and the basic skeleton of an app is available on the page, it’s a small step to having a working datepicker.

Our skeleton:


{- *** API *** -}
port module Component exposing (..)

import Date
import Html exposing (..)
import Html.Attributes exposing (..)


main : Program Never Model Msg
main =
    Html.program
        { init = init
        , view = view
        , update = update
        , subscriptions = always Sub.none
        }


init : ( Model, Cmd Msg )
init =
    ( { date = Nothing }, Cmd.none )



{- *** MODEL *** -}


type alias Model =
    { date : Maybe Date.Date }



{- *** VIEW *** -}


view : Model -> Html.Html Msg
view model =
    div [ class "date-container" ]
        [ label [ for "date-input" ] [ img [ alt "Calendar Icon" ] [] ]
        , input [ name "date-input", id "date-input" ] []
        ]



{- *** UPDATE *** -}


type Msg
    = NoOp


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        NoOp ->
            model ! [ Cmd.none ]

Next up, let’s port out to JS. We want to tell JS-land that we want to open a datepicker, and then we also want to change our model when JS-land tells us to.


port module Component exposing (..)

import Date
import Html exposing (..)
import Html.Attributes exposing (..)
import Html.Events exposing (..) -- we need Events for the first time


main : Program Never Model Msg
main =
    Html.program
        { init = init
        , view = view
        , update = update
        , subscriptions = subscriptions
        }


init : ( Model, Cmd Msg )
init =
    ( { date = Nothing }, Cmd.none )



{- *** MODEL *** -}


type alias Model =
    { date : Maybe Date.Date }



{- *** VIEW *** -}


view : Model -> Html.Html Msg
view model =
    div
        [ class "date-container" ]
        [ label [ for "date-input" ] [ img [ alt "Calendar Icon" ] [] ]
        , input
            [ name "date-input"
            , id "date-input"
            , onFocus OpenDatepicker
              -- Note that the only change to the view is here
            ]
            []
        ]



{- *** UPDATE *** -}


type Msg
    = NoOp
    | OpenDatepicker
    | UpdateDateValue String


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        NoOp ->
            model ! [ Cmd.none ]

        OpenDatepicker ->
            model ! [ openDatepicker () ]

        UpdateDateValue dateString ->
            { model | date = Date.fromString dateString |> Result.toMaybe } ! []



{- *** INTEROP *** -}


port openDatepicker : () -> Cmd msg


port changeDateValue : (String -> msg) -> Sub msg


subscriptions : Model -> Sub Msg
subscriptions model =
    changeDateValue UpdateDateValue

Note that here, we’re also carefully handling the string that we’re given from JavaScript. If we can’t parse the string into a Date, then we just don’t change the date value.
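The same defensive pattern is worth applying on the JavaScript side before sending anything through the port. Here’s a sketch (the `safeDateString` helper is ours) that mirrors `Date.fromString dateString |> Result.toMaybe`:

```javascript
// Return the string if it parses to a real date, otherwise null:
// the JS analogue of Elm's `Date.fromString |> Result.toMaybe`.
function safeDateString(dateString) {
  var parsed = new Date(dateString);
  // An unparseable string yields an Invalid Date, whose getTime() is NaN.
  return isNaN(parsed.getTime()) ? null : dateString;
}

safeDateString("2017-07-04"); // "2017-07-04" (ISO strings parse reliably)
safeDateString("not a date"); // null
```

Non-ISO formats are parsed in implementation-defined ways, so validating on both sides of the port is cheap insurance.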

Finally, let’s actually add our Elm app and datepicker to the page.


$(function() {
  var elmHost = document.getElementById("elm-host");
  var app = Elm.Component.embed(elmHost);

  $.datepicker.setDefaults({
    showOn: "focus",
    onSelect: sendDate,
  });

  app.ports.openDatepicker.subscribe(function() {
    $("#date-input").datepicker().datepicker("show");
  });

  function sendDate(dateString) {
    app.ports.changeDateValue.send(dateString);
  }
});

Checking this out in the browser (with a few additional CSS styles thrown in):

All we have to do is embed our app, open the datepicker when told to do so, and send values to Elm when appropriate! This is the same strategy to follow when working with any JS library.

Fancy Stuff

Storing the final word on a value outside of the UI component (i.e., outside the datepicker itself) makes it easier to handle complexity. At NoRedInk, engineers have built quite complicated UIs involving datepickers:

NoRedInkers changed the displayed text from a date-like string to “Right away”–and made /right away/i an allowed input

We can check to see if the selected date is the same as now, plus or minus some buffer, and send a string containing that information to Elm. This requires a fair amount of parsing and complicates how dates are stored in the model.
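That buffer check can be sketched in plain JavaScript. Everything here, the helper name, the five-minute buffer, and the exact strings, is our assumption for illustration, not NoRedInk’s production code:

```javascript
// Decide what string to send to Elm: "right away" when the selected
// date is within `bufferMs` of now, otherwise a date string.
// (Hypothetical helper; the buffer size is an arbitrary choice.)
function describeSelection(selected, now, bufferMs) {
  if (Math.abs(selected.getTime() - now.getTime()) <= bufferMs) {
    return "right away";
  }
  return selected.toISOString();
}

var now = new Date(2017, 5, 1, 12, 0, 0);
var FIVE_MINUTES = 5 * 60 * 1000;

describeSelection(new Date(2017, 5, 1, 12, 3, 0), now, FIVE_MINUTES); // "right away"
describeSelection(new Date(2017, 5, 8, 12, 0, 0), now, FIVE_MINUTES); // an ISO date string
```

The Elm side then pattern-matches on the string, which is where the extra parsing and model complexity mentioned above comes in.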

A simplified version of a similar concept follows–we add some enthusiasm to how we’re displaying selected dates by adding exclamation marks to the displayed date.

Note that this introduces a new dependency for date formatting (rluiten/elm-date-extra).


...

import Date
import Date.Extra.Config.Config_en_us
import Date.Extra.Format

...

viewCalendarInput : Int -> Maybe Date.Date -> Html Msg
viewCalendarInput id date =
    let
        inputId =
            "date-input-" ++ toString id

        dateValue =
            date
                |> Maybe.map (Date.Extra.Format.format Date.Extra.Config.Config_en_us.config "%m/%d/%Y!!!")
                |> Maybe.withDefault ""
    in
        div [ class "date-container" ]
            [ label [ for inputId ] [ viewCalendarIcon ]
            , input
                [ name inputId
                , Html.Attributes.id inputId
                , value dateValue
                , onFocus (OpenDatepicker inputId)
                ]
                []
            ]

...

We can make the value of the input box whatever we want, including a formatted date string with exclamation marks on the end! Note, though, that if we make whatever is in our input box un-parseable for the datepicker we’re using, we’ll have to give the datepicker more info if we want it to highlight the date we’ve selected when we reopen it. Most datepickers have a defaultDate option, and we can take advantage of that to handle this case.
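One sketch of that recovery step (the `stripDecoration` helper is ours; jQuery UI’s `defaultDate` option does accept a Date object):

```javascript
// The input shows something like "07/04/2017!!!", which the datepicker
// can't parse. Strip our decoration back off and rebuild a Date for
// `defaultDate`. (Hypothetical helper for illustration.)
function stripDecoration(displayed) {
  var match = displayed.match(/^(\d{2})\/(\d{2})\/(\d{4})/);
  if (!match) {
    return null;
  }
  // match[1] is the month: one-based in the string, zero-based for Date.
  return new Date(Number(match[3]), Number(match[1]) - 1, Number(match[2]));
}

var recovered = stripDecoration("07/04/2017!!!");
recovered.getMonth(); // 6, i.e. July
recovered.getDate();  // 4

// Usage with jQuery UI would then look something like:
//   $("#date-input").datepicker({ defaultDate: recovered });
```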

Note that we’ve also generalized our viewCalendarInput function. There are some other changes that we need to make to support having multiple date input fields per page–like having more than one date field on the model, and sending some way of determining which date field to update back from JS.

For brevity’s sake, we’ll exclude the code for supporting multiple date inputs per page, but here’s an image of the working functionality:

NoRedInkers created an autofill feature

Leveraging the type system, we can distinguish between user-set and automagically-set dates, and set a series of date steps to be any distance apart from each other by default. The fun here is in determining when to autofill–we shouldn’t autofill, for instance, after a user has cleared all but one autofilled field, but we should autofill if a user manually fills exactly one field.
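In JavaScript terms (Elm would use a union type for this), a sketch of the user-set/auto-set distinction and the “exactly one user-set field” rule; all names here are ours, invented for illustration:

```javascript
// Tag each field value with how it was set.
function userSet(date) { return { source: "user", date: date }; }
function autoSet(date) { return { source: "auto", date: date }; }

// Autofill only when exactly one field is filled, and that field
// was filled in by hand (fields may be null when empty).
function shouldAutofill(fields) {
  var filled = fields.filter(function (field) { return field !== null; });
  var userFilled = filled.filter(function (field) { return field.source === "user"; });
  return filled.length === 1 && userFilled.length === 1;
}

var start = userSet(new Date(2017, 5, 1));
shouldAutofill([start, null, null]);                          // true: fill in the rest
shouldAutofill([start, autoSet(new Date(2017, 5, 8)), null]); // false: already autofilled
shouldAutofill([autoSet(new Date(2017, 5, 1)), null, null]);  // false: user cleared their own input
```

Tagging the source of each value is what makes rules like “don’t re-autofill after the user clears fields” expressible at all.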

We actually decided that while this was slick, it would create a negative user experience; we scrapped the whole autofill idea before any users saw it. While there was business logic that we needed to rip out in order to yank the feature, we didn’t need to change any JavaScript code whatsoever. Writing the autofill functionality was fun, and then pulling out the functionality went really smoothly.

NoRedInkers supported user-set timezone preferences

I recommend rluiten/elm-date-extra, which supports manually passing in a timezone offset value and using the user’s browser-determined timezone offset. Thank you to Date-related open source project maintainers and contributors!
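As a tiny sketch of what offset arithmetic looks like (the helper is ours; a real library like elm-date-extra handles much more, including daylight saving transitions, which this sketch deliberately ignores):

```javascript
// Render a UTC timestamp shifted by a user-preferred offset, in minutes.
// (Hypothetical helper; real libraries also handle DST.)
function formatWithOffset(utcMs, offsetMinutes) {
  var shifted = new Date(utcMs + offsetMinutes * 60 * 1000);
  function pad(n) { return ("0" + n).slice(-2); }
  return (
    shifted.getUTCFullYear() + "-" +
    pad(shifted.getUTCMonth() + 1) + "-" +
    pad(shifted.getUTCDate()) + " " +
    pad(shifted.getUTCHours()) + ":" +
    pad(shifted.getUTCMinutes())
  );
}

formatWithOffset(0, 0);    // "1970-01-01 00:00", the epoch in UTC
formatWithOffset(0, -300); // "1969-12-31 19:00", the epoch in UTC-5
```

The browser’s own guess is available as `new Date().getTimezoneOffset()`; a user preference simply replaces that guess.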

Concluding

Someday the Elm community will have a glorious datepicker that developers use by default. For now, there are JavaScript datepickers out there available for use (and some up-and-coming Elm datepicker projects as well!), and for developers not ready to switch away from jQuery components, interop with JavaScript can smoothly integrate even very effect-heavy libraries.

There are components that don’t exist in Elm yet, but that shouldn’t stop us from using them in our Elm applications and it shouldn’t stop us from transitioning to Elm. Projects that need those components can still be written in beautiful, easy-to-follow, easy-to-love Elm code. Sure, it would be nice if it were all in Elm–for now, we can use our JavaScript component AND Elm!


Tessa Kelly
@t_kelly9
Engineer at NoRedInk
