Channel: NoRedInk

Learning Elm from scratch


Hello from a brand new Junior Engineer at NoRedInk! I started working at NoRedInk in January and it’s both my first job as a Software Engineer and my first time using Elm. Thinking about learning Elm? Here’s what it was like to learn Elm from scratch!

The Beginning

A brief background on my programming experience: in 2016, I attended a coding bootcamp that taught Ruby on Rails and JavaScript and stayed on for a year as a TA. I’d done some programming in Matlab during college, but little else before attending the bootcamp. After receiving an offer from NoRedInk, I spent three computer-free months road tripping across the United States. This all means that I had a) very little experience working with purely functional and statically-typed languages, b) programming skills coated in a three-month layer of road trip dust, and c) equally matched levels of excitement and terror over learning Elm well enough to contribute to NRI’s codebase.

My initial terror towards starting a new job and learning a new language quickly abated. I spent 100% of my first week pairing with more experienced engineers on the team, which was both educational and fun (the NRI team is an amusing bunch). Nonetheless, my confusion abounded, stemming mainly from the size of the codebase (Help! where does everything live?) and my unfamiliarity with Elm. While some parts of Elm made intuitive sense, other aspects of the language felt perplexing and mysterious.

Battling Confusion

Elm was the first language to introduce me to type signatures. I was told that their purpose was to provide helpful compiler errors in lieu of unhelpful runtime errors but, as an inexperienced user, they mostly provided me with confusion. During my first day writing code at NoRedInk, I encountered the Html.map function while looking at a reusable view and was rather perplexed. Its type signature looks like this:


map : (a -> msg) -> Html a -> Html msg

I wasn’t quite sure what Html.map’s type signature meant by (a -> msg), nor did I understand what a or msg were supposed to be. Beyond making sense of its type signature, I had little understanding of why we needed to use Html.map in the first place.

While I wish I could report that I went home that day with a firm grasp on Html.map (and all things Elm-related really), the reality was that it took a while longer for the pieces to come together. Html.map lies at the intersection of several concepts that were new to me and I was missing too many pieces of context to understand how it worked. Before I could understand Html.map, I needed to have a grasp on type signatures, Elm architecture, and the general idea of passing around functions as arguments. However, as a brand-new Elm user, I was not yet aware that I was missing these pieces of context and felt frustrated when I didn’t immediately understand what was going on.

Luckily, I had a team of experienced Elm developers at my disposal who could point me towards a number of useful resources and learning strategies. Here are some resources and strategies that I found particularly helpful:

The Elm Tutorial: The Elm Tutorial is a great resource for beginners. Walking through the tutorial from beginning to end gave me a good high level overview of Elm architecture and provided me with pieces of context I didn’t even know I was missing. After finishing the tutorial, I felt significantly less confused about how Elm deals with user interactions and understood why a view function returns an Html msg. Working through the tutorial was also a good way to handle Elm code in a simplified context, rather than trying to understand what was going on in NoRedInk’s complex codebase.

The Elm Package Docs: Whenever I’m confused about a function or am wondering whether a function that I need exists, I consult the Elm Package docs. While they are a stellar resource, the Elm Package docs can also feel overwhelming. They contain a lot of information, and it can feel difficult to know where to start. For beginners interested in creating a basic web app, referencing the Core and HTML packages provides a good starting point. I also find it helpful to think about what type signature will solve the problem at hand and search for that type signature when looking for functions. For example, if I’m looking for a function that takes in an Int and returns a String, I can grep for Int -> String in a specific package. Like learning a new language, learning to navigate documentation takes time, and as I’ve worked more with Elm, I’ve become more confident in using the docs to look up what I need.

Drawing connections: Another strategy that has helped me is drawing connections between unfamiliar concepts and concepts that I understand well. In the case of Html.map, it was helpful to look at List.map. Just like JavaScript’s map function, Elm’s List.map requires a function and a list. It uses the function to transform each element in the list into a new element:


map : (a -> b) -> List a -> List b

A member of my team pointed out that Html.map works similarly. Instead of transforming elements of a list, it transforms a msg. Drawing a connection to a concept that I did understand well helped Html.map click.

The Elm Community: As an employee at NoRedInk, I have the unique advantage of working with experienced members of the Elm community on a daily basis. If I have a question, I need look no further than the desk next to me to seek help. Being unafraid to ask “stupid” questions has also been extremely valuable to my learning process. Sometimes, there is no better way to resolve confusion than admitting that you don’t know something and asking a human being who does!

Even if you lack the convenience of having experienced Elm developers one desk over, there are still several ways to connect with the Elm community, including the Elm Subreddit, Elm’s Slack, and Elm meetup groups. In my experience, the Elm community is very friendly and wants to help you learn, whatever your background and current level of knowledge. Meander into the Elm grove and say hello!

Making Peace with Confusion

My ultimate piece of advice is to dive right in and try to build something if you’re thinking of learning Elm. There are plenty of resources out there to help you, and there is no better way to start learning a language than to… start learning it!

I’ve grown to love parts of the language that I initially found confusing and frustrating. Type signatures and compiler errors are my new best friends (along with my great new coworkers, of course). Type signatures force me to think a few moves ahead and be more conscious of the code that I’m writing. Compiler errors tell me exactly what I’m doing wrong without forcing me to embark on the grand debugger chase-down of 2017. I’d almost call debugging my code… fun?

Sometimes, the amount of context needed to understand a seemingly simple snippet of code can feel overwhelming. It’s okay to need more context! Learning a new language isn’t about understanding everything immediately; it’s about building foundations and circling back to complex topics later. It can be frustrating not to understand concepts at first glance, but as I’ve picked up more languages, I’ve accepted confusion as a natural part of the process. I am still new to Elm and have a lot to learn, but my fear of learning the language has greatly diminished. It’s all fun from here on out! I’m excited to be working at NoRedInk and look forward to sharing more about the joy and confusion that accompany learning a new language. Interested in joining us? We’re hiring!


Brooke Angel
Engineer at NoRedInk


Swapping Engines Mid-Flight


A few months ago, I had the privilege of joining the product team for our First Design Sprint. Starting with a huge user pain-point, we used the Design Sprint process to arrive at a solution with a validated design (yay, classroom visits!), and a working prototype. If you’re curious about that process, I highly recommend you give that post a read. Long story short (and simplified): students practice on NoRedInk to gain mastery; the old way of calculating mastery frustrated students… a lot; the new way of calculating mastery feels much more fair.

This post is about what came after the design sprint:

We replaced the core functionality of our site with a completely new experience, without downtime, without a huge feature branch full of merge conflicts, and while backfilling 250 million records.

Actually, this post is only the first of two, in which I hope to discuss the strategies we did and didn’t use to build and deploy this feature. A future post will be a deep-dive into backfilling the 250 million rows without requiring weeks of wall time. I make no claim we did anything original or even unexpected. But, I hope reading this particular journey, and my missteps along the way, will bring together some pieces that help you in your own work.

The Omnibus Strategy

I’ve been working at NoRedInk for 4 years – back since the engineering team consisted of just a handful of us – and things have changed a lot. In the early days, when we had a big new feature we would:

  1. Start an omnibus feature branch
  2. Create feature branches off of the omnibus branch
  3. Review that feature branch, and merge it into the omnibus branch
  4. Resolve all the merge conflicts in the omnibus branch that crop up as other engineers merged code into master
    1. Then, deal with merge conflicts between the omnibus branch and any/all feature branches
  5. Keep creating, reviewing, and merging feature branches until the omnibus branch is fully featured
  6. QA the completed omnibus branch
  7. Merge the omnibus branch into master and deploy

As we added more team members, and our features got more complex, the merge conflicts became a nightmare. I had heard this could be avoided by using feature flags, but (though I’d never actually tried it) I’d decided that the resulting code complexity wasn’t worth it. Maybe I was right back when we had 3 engineers, but by the time we were 6+ - quite frankly - I was dead wrong.

The Flipper Strategy

Around year 3, we started using feature flags for large features (in particular, we use the Flipper gem) thanks to some polite prodding by the trio of Charles, Marica, and Rao. For the uninitiated, this produces code similar to the following all over your codebase:


if FeatureFlag[:new_thing].enabled?
  do_fancy_new_thing()
else
  do_old_thing()
end

As long as that feature flag is turned off, your new code has no effect. The magical win you get when you write code that doesn’t affect users is you can merge every little PR about your new feature directly into master! No extended merge conflicts. No branches off of branches. And if you’re using feature flags, you can have tests for both the new and old functionality co-exist. Plus when you’re ready, you can turn the new feature on (and back off) without a deploy.

The new approach looks like this:

  1. Start a feature branch off of master
  2. Code up a small piece of your new feature, and put that functionality behind a feature flag. Make sure the old functionality still works
  3. Review that branch as if it were any other PR, except now we need to make sure both the new functionality works and the old functionality is unchanged
  4. Merge your PR into master

It’s almost exactly the same as development-as-usual.

Side note: you don’t need feature flags to merge not-yet-released code. As long as the new functionality is disabled (e.g. if false) or no-op (e.g. writing data to an as-of-yet unused table) you’re in good shape. What feature flags give you is an easy way to toggle functionality in tests, during QA, and on production – so your “disabled” functionality can also be easily verified and tested.
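To make that toggling concrete, here’s a minimal, self-contained sketch of the pattern. Note that this is a hand-rolled stand-in, not Flipper’s actual API (Flipper exposes methods like Flipper.enable); the FeatureFlags class and calculate_thing function are invented for illustration:

```ruby
# Minimal stand-in for a feature-flag store (NOT the real Flipper API).
class FeatureFlags
  @flags = {}

  class << self
    def enable(name)
      @flags[name] = true
    end

    def disable(name)
      @flags[name] = false
    end

    def enabled?(name)
      @flags.fetch(name, false)
    end
  end
end

# Both code paths live in master; the flag decides which one runs.
def calculate_thing
  if FeatureFlags.enabled?(:new_thing)
    :fancy_new_thing
  else
    :old_thing
  end
end

FeatureFlags.disable(:new_thing)
calculate_thing # => :old_thing
FeatureFlags.enable(:new_thing) # flipped at runtime, no deploy required
calculate_thing # => :fancy_new_thing
```

Because the flag can flip at runtime, a test suite can exercise both branches of calculate_thing in the same run.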

Running Two Different Engines at Once

The first talk I heard about migrating between systems with a lot of usage was a talk in 2010 by Harry Heymann at Foursquare. They were moving from PostgreSQL to MongoDB while users were “checking in” ~1.6M times / day. They followed a pretty clean approach:

  1. Build the new system to run in parallel with the old system. Write to both systems, but keep reading from only the old system.
  2. Validate that new system is running as expected. At this point, we’re confident all data moving forward is good.
  3. Backfill the new system.
  4. Swap! Start reading from the new system - and you’re live!
  5. Retire the old system.

“Swap!” in our case, meant turning on the feature flag.

This seemed like the right approach. Even our usage numbers are similar – our usage today is about 5x theirs in 2010.

The key difference for us is that Foursquare had two systems that were expected to work identically, while we have two systems designed to work completely differently. One example: if a student answers a question incorrectly on the site, the old system would take away 50% of her mastery points, while the new system doesn’t take away any points but requires her to get three questions correct in a row before she can earn points again.
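To make the new rule concrete, here’s a toy simulation of the new engine’s scoring only. The point values are assumptions for illustration (+20 per correct answer, no penalty for an incorrect one), not the actual mastery formula:

```ruby
# Toy model of the NEW engine: an incorrect answer costs nothing, but the
# next three correct answers earn nothing while the streak rebuilds.
def new_system_scores(answers)
  score = 0
  streak_needed = 0
  scores = [score] # include the initial score of 0

  answers.each do |correct|
    if correct
      if streak_needed > 0
        streak_needed -= 1 # rebuilding the three-in-a-row streak earns no points
      else
        score += 20
      end
    else
      streak_needed = 3 # no points lost, but the next three must be correct
    end
    scores << score
  end

  scores
end

answers = [true, false, true, true, true, true, true, true]
new_system_scores(answers)
# => [0, 20, 20, 20, 20, 20, 40, 60, 80]
```

Run on Susan’s answer sequence, this reproduces the “New System Score” column in the table below.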

So, here’s the problem. Let’s imagine Susan is doing her homework while we’re writing to both systems. At this point, the “Old System” is still what users are seeing. The following are real mastery score calculations from both systems:


| Susan       | Old System Score | New System Score |
----------------------------------------------------
| initial     |         0        |         0        |
| correct     |        20        |        20        |
| incorrect   |        10        |        20        | Scores don't match anymore !!!
| correct     |        10        |        20        |
| correct     |        30        |        20        |
| correct     |        50        |        20        |
| correct     |        70        |        40        |
| correct     |        90        |        60        |
| correct     |       100  done! |        80        |

Great! Susan is done with her homework, and she has a grade of 100. Then tomorrow, we swap to the new system. Suddenly, her grade just dropped to an 80! I’ll let you imagine how furious students and teachers would be if we let that happen.

We’re using feature flags to deploy new code right away, we’re writing to both systems just like Foursquare… I just need everything to match when we flip the feature flag.

I came up with a plan. I’d run the backfill script on the historical data and all the recent data. That way, we overwrite all “New System” data so that it would perfectly match “Old System” scores. Susan’s “New System Score” gets overwritten to be 100, and crisis averted. We’d just have to bring the site down for a couple hours on the weekend so there wouldn’t be any additional writes while the script is running.

Here’s Susan again:


| Susan       | Old System Score | New System Score |
----------------------------------------------------
| initial     |         0        |         0        |
| correct     |        20        |        20        |
| incorrect   |        10        |        20        | Scores don't match anymore !!!
| correct     |        10        |        20        |
| correct     |        30        |        20        |
| correct     |        50        |        20        |
| correct     |        70        |        40        |
| correct     |        90        |        60        |
| correct     |       100  done! |        80        |

        TAKE THE SITE DOWN FOR MAINTENANCE

| RUN SCRIPT  |       100        |       100        | Scores match again !!!

             BRING THE SITE BACK UP

There are two problems with this. One, my estimate of “a couple hours of downtime” turned out to be wildly optimistic (I’ll talk more about how wildly in a future post). But moreover, I was solving the wrong problem: there was no reason to let the scores get out of sync to begin with…

Running Two Different Engines in Sync

Foursquare had the right idea, I’d just been applying it wrong. We needed to sync up the two datastores first, and only afterwards start using the new calculation. The key was to write to both datastores with identical values until turning on the feature flag. So, here’s the plan we actually used (changes in bold):

  1. Build the new datastore to run in parallel with the old system. Write the values from the old system to both datastores, but keep reading from only the old datastore.
  2. Validate that new system is recording the same values. At this point, we’re confident all data moving forward is good.
  3. Backfill the new system.
  4. Swap! Turn on the feature flag: start reading from the new system, and use the new calculation.
  5. Retire the old system.

Now Susan’s scores will be identical in both systems, and there’s no need to bring the site down before swapping to the new system.
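Here’s a minimal sketch of the dual-write in step 1. The names (record_score, the stores) and the idea of passing in both engines’ values are invented for illustration; the real engines are elided. The point is that both datastores always receive the same value, so they cannot drift apart before the swap:

```ruby
old_store = {}
new_store = {}
new_engine_enabled = false # the feature flag; off until the swap

# Hypothetical write path: whichever engine's value the flag selects,
# the SAME value is written to both datastores.
record_score = lambda do |student_id, old_engine_score, new_engine_score|
  score = new_engine_enabled ? new_engine_score : old_engine_score
  old_store[student_id] = score
  new_store[student_id] = score # identical writes: the stores can't diverge
end

# Before the flag flips, the old engine's value (100) wins in both stores,
# even though the new engine would have computed 80.
record_score.call(1, 100, 80)
old_store[1] == new_store[1] # => true, safe to backfill and swap at any time
```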

In Conclusion

So what have I learned? First, be careful what lessons you take from others’ experience. And, if you think you need to take the site down to make a change, consider again very carefully.

If you notice anything I missed or got wrong, I’d love to hear about it and keep learning - please write to me and let me know. Thanks!


Josh Leven
@thejosh
Engineer at NoRedInk

A Day in the Life of a Curriculum Specialist


Stephanie has been a Curriculum Specialist at NoRedInk since June 2016. Before joining the company, she created literacy curriculum and assessments for a charter network in New York City. Previously, she taught middle school English in Madrid. At NoRedInk, she feels lucky to spend all day thinking deeply about how to leverage technology to support students’ development as writers.

8:45 a.m. - I arrive at the office! It’s pretty quiet at this time—a few of us like to get in early, while others may opt to work from home or to commute in mid-morning. I grab some cereal from our snack room and spend some time skimming the EdSurge newsletter. I always enjoy reading about the challenges and successes that other edtech products experience—there are often lessons we can learn vicariously!

9:15 a.m. - I sit right behind two of our designers, so I often get sneak peeks of new features they’re working on. For the past few months, our designer Becca has been gathering input from teachers and exploring some potential changes to the site’s assignment creation form. Today, she shows me and another one of our colleagues a recent mock-up of the new form. We discuss how our curriculum can be presented most helpfully so that teachers can easily determine what exercises to prioritize and locate topics that align with their state standards.

10:00 a.m. - My colleague Nellie and I meet in a room named “The Arena.” (All of our rooms are named after settings from the top student interests on the site—in this case, The Hunger Games.) We’re in the midst of designing a new “taxonomy,” our name for the scope and sequence of exercises that aims to help students master a larger skill. In this case, we’re focusing on transition words and phrases. Previously, our team researched the topic and established high-level objectives for the pathway. We also drafted sample exercises that we thought could help students achieve these objectives. Now, we’re going to take a close look at our draft and consider which topics we might want to add, cut, or alter.

On the whiteboard in The Arena, Nellie and I sketch exercises and discuss the interfaces that we think would best teach the concept. We note any new technical or design needs to share with our Product and Engineering teams.

12:00 p.m. - Every day, our Curriculum team holds “standup,” a quick, 20-minute meeting where we address issues, ask questions, and make announcements that are relevant to the whole team. One of our team members is based out of Boston, so we log into a Google Hangout so that he can join us on the monitor. Today, one topic of discussion is our upcoming classroom observation. We’ll be testing a couple of new exercises and lessons in a local school to see how helpful they are to students. Observations provide us with crucial data in our curriculum development process. For now, we check in to ensure that we’re all clear on our plan!

12:25 p.m. - It’s Thursday, so it’s a food truck day! Every Tuesday and Thursday, a different selection of food trucks park themselves right in front of our office. We pop downstairs to see what the offerings are.

1:00 p.m. - It’s time for our Support team meeting! I love answering customer support tickets because the process helps me to put myself in teachers’ and students’ shoes. Our Support team is made up of members of the Curriculum and Customer Success teams; most of us are former teachers ourselves. Every Thursday, we gather to discuss any important updates or bugs that have cropped up during the week. This week, we’re also spending some time recording teacher feedback in our “Feature Requests Log.” Whenever a teacher or student makes a suggestion, we record it so that we can identify trends and provide helpful context to our Product team as they consider improvements to the site. Today, we’re logging teacher feedback that we collected during live professional development sessions. We’re happy when we notice that we already have projects in the works to address many of teachers’ concerns, but we also spot some great new suggestions.

2:00 p.m. - We’ve enlisted the help of our user researcher, Christa, to dig into the data on a learning pathway that we released a couple months ago: Topic Sentences. We’re eager to determine which topics in the pathway students have found easiest and most challenging, and whether these results align with our expectations. We’ll use the data to identify any outliers and make adjustments accordingly.

2:30 p.m. - I love pairing with team members on projects, but to build curriculum, independent work time is also essential. Today, I’m working on “approvals” for our Claims, Evidence, and Reasoning learning pathway. This means that I’ll review all the questions our team has written before they go live on the site. I’ll consider: Are there any typos? Is the writing high quality? Does each question teach the objective we set for this topic? Are the questions fair? Is the subject matter engaging? I grab The Chicago Manual of Style to look up a rule about using hyphens, and I leaf through The Book Thief to double-check a quote.

4:00 p.m. - Next, our team begins a final “Content QA.” We each log in and explore the pathway from a student’s perspective, answering questions correctly and incorrectly. We evaluate whether the flow of topics makes sense and whether the lessons are helpful. Seeing the questions live on the site is also a great way to spot any bigger-picture gaps we may have overlooked earlier in the process when we were focusing intensely on the details.

5:30 p.m. - I grab my jacket and join the group of NoRedInkers gathering by the door. Every five weeks, we hold a book club. These meetings usually include pizza, laughter, and thoughtful discussion. It’s always a pleasure to hear others’ perspectives and spend non-work time together. I can’t wait to discuss this month’s pick!

Stephanie Wye

Curriculum Specialist at

NoRedInk

We’re hiring! Check out our job postings at https://www.noredink.com/careers

Ruby Threading: Some Practical Lessons


I was recently working on backfilling about 250,000,000 rows of data. As you may have read in Jocey’s post about our First Design Sprint, we were in the process of swapping out our old mastery experience for a brand new one. The rake task I initially wrote to do the backfill was far too slow, and I needed to find ways to speed it up. Last month I wrote a post about swapping out mastery engines and next month I’ll be posting one about backfilling the data – but in the process I ran into a few surprises specifically around threading in Ruby. This post is my attempt to keep you from bumping into those same mistakes.

First of all, if you are looking to parallelize a rake task or background job, managing threads by hand is probably not the best solution. This approach is often more complex than the alternatives, as it requires you to manage the life cycle of each thread, their coordination, and their access to shared state. In most cases, I’d recommend using a background job tool like Resque (forking), or Sidekiq (threaded). But, if you’re dead-set on managing your own thread pool in Ruby, there are a few things you should know.

When Ruby Threads Are Helpful

Threads do have a few advantages:

  • Threading can save you a lot of memory over processes. Multiple threads share a single instance of a Rails application, while multiple processes each need their own copy. If you want to run 10 workers and your application uses 50MB of memory, threads will save you upwards of 450MB compared to processes. Not too shabby.
  • Threads give you a lot of control over their execution. While multiple processes are scheduled primarily by your OS, threads can be orchestrated by you and your program.

If you are using MRI Ruby, there is one extra thing to consider: the infamous Global Interpreter Lock (GIL). Many things have been written on the GIL, and I encourage you to dive in deeper. But for now, in a nutshell, the GIL means that:

No matter how many threads you have, and no matter how many cores your computer has – at any given moment only one thread on one core will ever be running within your Ruby process.

So, if you have a complicated calculation to perform, dividing that calculation up amongst many threads wins you… nothing. A single CPU core will be responsible for all the work, only one thread will be running at any given time, pretty much the same as if you had written your calculation to be single threaded. However, if you have a long-running task which is frequently waiting on an external service (e.g. MySQL queries), it’s a bit of a different story.

Let’s say you have thousands upon thousands of SQL queries to run. In Ruby, when one thread is waiting on a response from the database, that thread will yield control to another thread. That second thread can then assemble and perform a different query, and start waiting on its response. Maybe, at this point, the first thread has received a response from the database and continues on its merry way.

In my case, the queries I needed to run were expensive and the application was spending ~50% of the time waiting on the database. This is a great candidate for speeding up using threading in Ruby. With multiple threads, we can have one thread doing work while another is waiting for the database.

Side notes:

  1. The GIL is a lock. The currently running thread holds that lock and no other thread can run until it releases the lock.
  2. When an MRI Ruby thread wants to do any IO, it actually calls out to the kernel to perform that IO. In kernel-space the GIL doesn’t apply! Ruby releases the GIL as soon as the request has been sent to the kernel, at which point another thread can run.
  3. JRuby and Rubinius do not have a GIL, so Ruby threads are more broadly useful on those platforms. E.g. unlike on MRI, threading there can be used to exploit multiple cores.
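A quick experiment demonstrates why IO-bound work still benefits from threads under the GIL. Here, sleep is a stand-in for waiting on a slow database query; MRI releases the GIL while a thread sleeps or waits on IO, so the threaded version finishes in roughly the time of one “query”:

```ruby
require "benchmark"

def fake_query
  sleep(0.2) # stand-in for waiting on the database; MRI releases the GIL here
end

# Run four "queries" one after another.
sequential = Benchmark.realtime { 4.times { fake_query } }

# Run the same four "queries" on four threads; they all wait concurrently.
threaded = Benchmark.realtime do
  4.times.map { Thread.new { fake_query } }.each(&:join)
end

puts format("sequential: %.2fs  threaded: %.2fs", sequential, threaded)
# threaded takes roughly the time of one query; sequential takes ~4x as long
```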

Adding Threads

To add threading we need to:

  1. Have a way to distribute our problem between the threads
  2. Create and run each of the threads

There are a few different ways to divide up a problem between threads. If you’re familiar with background jobs, then you’ve seen the use of a queue where each worker pulls its next job off of that queue.

Side note, if multiple threads are accessing the same queue, you need a queue which is thread-safe so that those threads don’t step on each other’s toes. Thread safety is a great topic, but I’ll leave it to other blog posts like this one.
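For completeness, here’s a sketch of that queue-based alternative using Ruby’s built-in, thread-safe Queue. The doubling “work” and the variable names are placeholders for real per-user processing:

```ruby
queue = Queue.new # Ruby's built-in Queue is thread-safe
(1..10).each { |user_id| queue << user_id }

results = Queue.new # also thread-safe, so workers can report back safely

workers = 3.times.map do
  Thread.new do
    begin
      loop do
        user_id = queue.pop(true) # non-blocking pop; raises ThreadError when empty
        results << user_id * 2    # placeholder for real per-user work
      end
    rescue ThreadError
      # Queue drained (possibly by another worker); this worker is done.
    end
  end
end
workers.each(&:join)

results.size # => 10, every id processed exactly once
```

Because Queue synchronizes access internally, two workers can never pop the same job, which is exactly the “stepping on each other’s toes” problem a plain Array would have.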

In my case we were iterating through a long list of user ids, so I can avoid worrying about thread safety by dividing up those user ids amongst each of the threads in advance. Each thread manages its own list of ids and nothing is shared between threads.

Creating a Thread in Ruby is surprisingly easy:


Thread.new {
  print "I'm running in a thread. Woohoo!!"
}

However, as we’ll see, there are quite a few gotchas to be aware of. The first one you see in any Ruby threading tutorial: your program will happily exit even if your threads haven’t finished. If you want the program to wait for all threads to finish, it’s up to you to say so. For example, this code:


5.times do |i|
  Thread.new {
    sleep(1)
    print "I'm running in thread #{i}. Woohoo!!"
  }
end
print "All done, exiting"

will produce the following output:


All done, exiting

The main thread (your program) creates each thread. All the threads start, and they will each start sleeping. But before they get to their print statements the main thread prints “All done, exiting” and exits. And when the main thread exits, all threads it created are killed as well.

The key is to join each thread before moving on. The Thread#join method blocks the calling thread until the thread it’s called on has finished running (or until an optional timeout expires).

Collect all the threads, call join on each, and we get the output we want:

threads = []

5.times do |i|
  threads.push Thread.new {
    sleep(1)
    puts "I'm awake in thread #{i}. Woohoo!!"
  }
end

threads.each { |thread| thread.join }

puts "All done, exiting"

will produce the following output:


I'm awake in thread 0. Woohoo!!
I'm awake in thread 1. Woohoo!!
I'm awake in thread 4. Woohoo!!
I'm awake in thread 2. Woohoo!!
I'm awake in thread 3. Woohoo!!
All done, exiting

The order is non-deterministic, but the “sleeping” threads are guaranteed to finish before the main thread.

So let’s take a look at that rake task I want to speed up. Here’s the script before threading:

task :sync_mastery_scores, [:start_id, :max_id] => :environment do |_, args|
  ids = ( args[:start_id] .. args[:max_id] )

  ids.step(BATCH_SIZE) do |first_id|
    last_id = first_id + BATCH_SIZE
    Mastery.convert_old_to_new!( first_id, last_id )
  end
end

Here’s the script all ready for threading:

task :sync_mastery_scores, [:start_id, :max_id] => :environment do |_, args|
  ids = ( args[:start_id] .. args[:max_id] )

  thread_each(n_threads: N_THREADS, ids: ids, batch_size: BATCH_SIZE) do |first_id, last_id|
    Mastery.convert_old_to_new!( first_id, last_id )
  end
end

Notice the script has barely changed. The lines:


ids.step(BATCH_SIZE) do |first_id|
  last_id = first_id + BATCH_SIZE
  ...
end

have been replaced with:


thread_each(n_threads: 2, ids: ids, batch_size: BATCH_SIZE) do |first_id, last_id|
  ...
end

This new thread_each function needs to divide up ids into separate zones of ids, one for each thread.

(0...n_threads).each do |thread_idx|
  thread_first_id = ids.first + (thread_idx * ids_per_thread)
  thread_last_id = thread_first_id + ids_per_thread

  thread_ids = (thread_first_id...thread_last_id)

  # ...
  # Start a Thread and iterate through `thread_ids`
  # ...
end

We can fill in that last section by creating a Thread, and having the thread loop through its thread_ids in batches, passing each batch into the block. Just don’t forget to join all the threads at the end!

ids_per_thread = (ids.size / n_threads.to_f).ceil
threads = []

(0...n_threads).each do |thread_idx|
  thread_first_id = ids.first + (thread_idx * ids_per_thread)
  thread_last_id = thread_first_id + ids_per_thread

  thread_ids = (thread_first_id...thread_last_id)

  threads.push Thread.new {
    puts "Thread #{thread_idx} | Starting!"

    thread_ids.step(batch_size) do |id|
      block.call(id, id + batch_size - 1)
    end

    puts "Thread #{thread_idx} | Complete!"
  }
end

threads.each { |t| t.join } # wait for all the Threads to complete

By the way, I tried a few different values of n_threads and found that 2 gave the best performance. Your mileage may vary.

So actually… this code is close to working, but it turns out there are a few problems with it.

Thread Gotchas

NameError ?

The first time I ran the threaded script, I saw all sorts of bizarre errors like:

NameError: uninitialized constant StudentTopic

This is because Rails 3.x is not thread-safe by default. (Rails 4+ is, and thankfully we’ll be upgraded to Rails 4 very soon.) There is a config setting to make Rails thread safe, but using that would be too easy a solution for this post 😉. The key is to make sure all files that you need are loaded before creating any of the threads – even files which are dependencies of the ones you need directly. In this case, Mastery requires Student. So, at the top of the thread_each function, before creating any threads, I added:

def thread_each(n_threads:, ids:, batch_size:, &block)
  preload = [ Mastery, Student ] # referencing these classes forces Rails to load them before any threads start

  ...
end

1 + 1 > 2 ?

When I ran it again, it seemed like everything worked great! Until exactly 50% of the way done:

New Mastery Records: |========                | 50.00%
rake aborted!
ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5 seconds
...
___/active_record/connection_adapters/abstract/connection_pool.rb:258:in `block (2 levels) in checkout'
...

We have two problems here. The first is that the connection pool only has 2 connections in it, and for some reason I need more than 2 - even though I only have 2 threads running! The truth is, I have three threads running – the two we create in thread_each, and the main thread that creates them. If the main thread grabs a connection from the connection pool, then there’s only 1 connection left in the pool for the other two threads. Oh noes!!!

(You may be thinking - Josh, in the script you’ve been showing us, the main thread doesn’t access any connections – and you’d be right. I’ve simplified things a little for the sake of this blog post. In the real script, the main thread executed a couple queries before starting up the threads.)
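The arithmetic is easy to reproduce with a toy pool in plain Ruby (this is not ActiveRecord's API, just an illustration): two "connections", a main thread that checks one out and keeps it, and two workers that each attempt a non-blocking checkout.

```ruby
pool = Queue.new
2.times { |i| pool << "conn-#{i}" } # a pool holding 2 connections

main_conn = pool.pop # the main thread grabs a connection and holds onto it

outcomes = Array.new(2) do
  Thread.new do
    begin
      pool.pop(true) # non-blocking checkout of a connection
      :ok
    rescue ThreadError # the pool is empty: checkout fails
      :no_connection
    end
  end
end.map(&:value)

p outcomes.sort
# => [:no_connection, :ok]
```

Pushing `main_conn` back onto the pool before spawning the workers lets both of them succeed, which is the moral equivalent of the `release_connection` fix described below.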

We could increase the size of the connection pool; it’s just a config value in database.yml. We could create our own ConnectionPool for use by this script. Or, we could have the main thread return its connection to the pool when it’s done using it. Since “releasing the connection from the main thread” is a one-line change and the change is local to the script I’m working on, that’s the option I chose. Here’s the one line; be sure to add it before creating the other threads:

ActiveRecord::Base.connection_pool.release_connection

...
end

50% ?

Okay! But there’s still the question – why did the script fail at exactly 50% done? Well, actually, it sort of didn’t.

When we created the two threads, the first one attempted to execute a query using ActiveRecord, grabbed the last connection from the connection pool, and carried along its merry way. Then the second thread came right along and tried to execute a query using ActiveRecord, but failed to get a connection from the pool, and immediately failed. The thing is, that second thread didn’t tell the main thread that it failed until the main thread called join on it.

The main thread created the two threads. The first one works great, the second one fails almost immediately. Then the main thread calls join on the first thread and waits until the first thread is complete – which happens when we are exactly 50% done with the script!!! At that point, the main thread gets control back and calls join on the second thread. And that’s when the second thread finally tells the main thread that it has failed.
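This delayed failure is a general property of Ruby threads, easy to see in a few lines: an exception raised inside a thread stays hidden until the main thread calls join (or value) on it.

```ruby
Thread.report_on_exception = false # silence the default death notice so only join surfaces the error

t = Thread.new { raise "boom" }
sleep 0.1            # the thread has already died by now...

still_running = true # ...but the main thread hasn't noticed anything

surfaced = nil
begin
  t.join             # the exception is re-raised here, at join time
rescue RuntimeError => e
  surfaced = e.message
end

p [still_running, surfaced]
# => [true, "boom"]
```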

Well, that’s pretty frustrating! However, again, there is a simple solution. You can instruct a thread to abort on an exception right away, instead of waiting for a join. We just need to add one more line to our thread_each function right before calling join:

threads.each { |t| t.abort_on_exception = true } # make sure the whole program fails if one thread fails
threads.each { |t| t.join } # wait for all the Threads to complete
end

And with that, here’s the final script end to end:

task :sync_mastery_scores, [:start_id, :max_id] => :environment do |_, args|
  ids = ( args[:start_id] .. args[:max_id] )

  thread_each(n_threads: N_THREADS, ids: ids, batch_size: BATCH_SIZE) do |first_id, last_id|
    Mastery.convert_old_to_new!( first_id, last_id )
  end
end

def thread_each(n_threads:, ids:, batch_size:, &block)
  preload = [ Mastery, Student ] # referencing these classes forces Rails to load them before any threads start

  ActiveRecord::Base.connection_pool.release_connection

  threads = []

  ids_per_thread = (ids.size / n_threads.to_f).ceil

  (0...n_threads).each do |thread_idx|
    thread_first_id = ids.first + (thread_idx * ids_per_thread)
    thread_last_id = thread_first_id + ids_per_thread

    thread_ids = (thread_first_id...thread_last_id)

    threads.append Thread.new {
      puts "Thread #{thread_idx} | Starting!"

      thread_ids.step(batch_size) do |id|
        block.call(id, id + batch_size - 1)
      end

      puts "Thread #{thread_idx} | Complete!"
    }
  end

  threads.each { |t| t.abort_on_exception = true } # make sure the whole program fails if one thread fails
  threads.each { |t| t.join } # wait for all the Threads to complete
end

Caveats

I went the route of custom threading because I had a tight deadline I was trying to hit.

In general, this sort of problem/solution:

  • is not time sensitive
  • doesn’t need access to shared state

Which means it’s a prime candidate for using background jobs. If I were to use Sidekiq, I’d even get the memory efficiency benefits that I get with raw threads.

There are, however, plenty of problems that are a great fit for threading: specifically, those that

  • are time sensitive OR
  • need access to shared state

For example:

  • data processing that must run during a request, where I can use threads to return results sooner
  • getting something done immediately when the background job queue is already deep

In Conclusion

So those are a few things that tripped me up with Ruby threading – I hope they help. In an upcoming blog post, I’m excited to go into more of the swapping-out-mastery-engines journey with you, so keep an eye out for that. If you notice anything I missed along the way, I’d love to hear about it and keep learning - please write to me and let me know. Thanks!


Josh Leven


@thejosh


Engineer at

NoRedInk

New! Sharable assignments, SAT/ACT passages, exercises on argumentation


We’ve had some exciting new releases in the past few weeks! Here’s a recap:

New Free Features

Sharable Assignments

Teachers can now share a link to any assignment with their departments or grade-level teams. Simply click the “…” icon next to the assignment name, then “Share with Other Teachers.”

When other teachers click the link, they’ll see a copy of the original assignment that they can then customize and adjust!

Exercises on Claims, Evidence, and Reasoning

Our new Claims, Evidence, and Reasoning pathway is available for free through the end of July! To create an assignment, go to the assignment form, and click “Writing” and “Isolated Practice.”

These exercises coach students on how to evaluate and create powerful, logical, evidence-based arguments.

New Premium Feature

ACT/SAT Passages

NoRedInk now offers 12 passages specifically designed to help your students prepare for the ACT and SAT! These passages include the types of errors students will be asked to correct on test day.

To assign a passage, follow these steps:

  • Click “Quiz,” then select “New Quiz.”

  • Click “Select an ACT/SAT Passage.”

  • Choose a passage to assign!

Designing for Teachers: User-driven Information Architecture


It’s not breaking news that teachers are using technology in their classrooms more than ever. Public schools in the US now provide at least 1 computer for every 5 students and spend more than $3 billion per year on digital content. With their already packed schedules, teachers don’t have time to figure out websites and apps that are complicated and unintuitive. A key feature determining whether using a website feels simple and easy is the site’s information architecture, or IA. IA is the underlying structure that defines how the contents of a website (or any repository of information) are classified and organized.

Good IA goes unnoticed, allowing the user to navigate the site and find what they are looking for without a second thought. Bad IA makes itself obvious, and can often be the culprit of a frustrating user experience. My local supermarket, for example, continues to baffle me in the way that its goods are organized. On a hunt for peanut butter, I see the jelly and think to myself, “I must be getting close.” But alas, it’s hiding 4 aisles down, next to the olive oil, inexplicably.

This summer at NoRedInk, the product team embarked on a project to redesign the information architecture of the teacher side of the website. We hadn’t audited the IA since its launch in 2012, and we wanted to ensure that creating an assignment and viewing student results were as easy as finding the peanut butter next to the jelly. As with everything we do, the project focused heavily on user research. We utilized a variety of methods to get to the core problems with our IA and evaluate potential solutions, resulting in a final product we think teachers will find welcoming and intuitive when they come to NoRedInk this fall.

Phase 1: Gathering and synthesizing teacher feedback related to IA

The first step was to examine where our current IA wasn’t working well. We spoke to members of our Support, Customer Success, and Partnerships teams about feedback they’ve collected from teachers regarding usage challenges on the site. These teams interact with teachers every day, responding to support emails, conducting professional development training, and giving demos of the site, and they had great insights about common navigation pitfalls on the website. For example, the Support team tracks all the emails we get from teachers about specific problems or requests. The second most common issue reported this past school year was not being able to add new students to existing assignments - a problem we knew could be fixed with better IA.

We then conducted interviews with teachers who had recently signed up for NoRedInk in order to understand which aspects of teacher functionality were easy to do right away, and which parts of the site were more likely to go unnoticed. We learned that a few key aspects of NoRedInk - the different types of assignments we offer and the ability to track students’ mastery levels - weren’t always immediately clear to teachers in their initial experiences on NoRedInk.

Phase 2: Card Sorting

Once we knew the major problems with our current IA, we started to design solutions. Instead of building off the existing model, we wanted to give ourselves the freedom to start from scratch. So we began by listing out all of the teacher-facing pages on NoRedInk and experimenting with new ways of organizing the pages. Using a method called card sorting, we had teachers do the same. Card sorting is a tool that helps uncover the way users intuitively group and categorize the pages and functions on a website. The user is presented with a long list of the website’s contents, like “Preview an assignment,” and asked to sort them into categories and give each category a name. We recruited teachers who had never used NoRedInk to avoid bias from familiarity with the current structure. The card sorting tests revealed that participants largely agreed on the overarching categories on NoRedInk: Assignments, Student Performance, Classes, Settings, and Instructional Resources. From there, we had to drill down into the finer details of where more specific functions would be found and what to name them.

Optimal Workshop, the tool we used for card sorting, analyzes the results from each participant and quantifies how frequently cards were sorted into the same category.

We took what we learned from teacher interviews, support data, and card sorting to the drawing board, and each member of the product team mapped out some new structures. We had a brainstorming meeting in which we taped hard copies of the sitemaps up on the wall and went around with stickers to mark the ideas we liked the most.

Ideas of new sitemaps from our team brainstorm.

Phase 3: Tree Testing

Our brilliant designer Ben synthesized all of these ideas into two new versions of the IA: one that was more similar to the existing site, and one that was more “class-centric” - using a teacher’s classes as jumping off points to other parts of the site. We used a method called tree testing to evaluate whether the new versions made things easier to find compared to the existing IA. In a tree test, the user is presented with a hierarchical list representing the contents of a website and several tasks; the user clicks through the list and selects the places where they think they’d be able to complete the tasks.

A screenshot of one of the tasks in the tree test. Based on the feedback we heard in our initial research, we wanted to make sure that teachers could find where to add new students.

The data we collected from tree testing included where the participants expected to complete the tasks, the paths they took to get there, and how long they spent looking. We conducted several rounds of tree testing with participants who had never used NoRedInk before. After each round of testing we made changes to address places where participants were still having trouble. Sometimes we simply renamed a feature, like changing “Student Leaderboard” to “Top Performers.” Other times we changed the location of a feature, or added another way to navigate to it. All in all, we tested 7 different iterations until we came to a version that nearly all participants completed correctly and quickly.

Phase 4: New IA! Final Design and Validation

Ben transformed the final version of the IA we developed during tree testing into a beautiful new design for the teacher side of NoRedInk. The updated layout features a new menu bar with some renamed pages. “Lessons”, for example, became “Curriculum,” a clearinghouse for our scaffolded pathways, lessons, and tutorials designed to address a pain point we frequently encountered during our research: many teachers weren’t aware of the full breadth of curriculum available to them on NoRedInk. We also added a prominent sidebar menu where teachers can manage their class settings, including student rosters. The biggest change in the new IA is the class-centric teacher dashboard, where teachers can view their classes, see what’s upcoming for the week, and track how students are progressing on assignments. We knew from our research that those were the things teachers want to see right away, so we organized them front and center: teachers can jump quickly into assignments or student data while staying informed about the current state of their classes.

To validate the new design, we tested a working prototype to see whether the real layout, compared to the more artificial layout in the tree test, was still just as easy to navigate. We tested with new users who had very little experience on the site and with NoRedInk Ambassadors, who use the site regularly. The feedback we got from both groups was hugely positive, with multiple teachers using the word “streamlined” - exactly what we were going for.

Our current dashboard (left) and the new design, not yet in production (right).

What we learned

Looking back, the most important source of information was teacher feedback, via the Support and Customer Success teams and directly through interviews. That feedback heavily influenced the solutions we designed, and tree testing was a great tool to fine-tune and validate them. Card sorting, though a common and logical place to start when it comes to IA, didn’t tell us much beyond what we already knew. A better way to start might have been brainstorming creative ways to gather the kind of teacher feedback that ultimately drove our final solution. We’re really excited to release this more straightforward, user-friendly IA to teachers this fall!

At NoRedInk, our product team is deeply user-driven, and we are consistently pushing ourselves to find even better ways of getting feedback from teachers and students. If you’re passionate about building a product that teachers really want, our team is hiring— we’d love to hear from you!

Christa Simone is a User Researcher at NoRedInk, leveraging research and data to help build a product that teachers love.

Decoding Decoders


Introduction

This post is written for an Elm-y audience, but might be of interest to other developers too. We’re diving into defining clear application boundaries, so if you’re a believer in miscellaneous middleware and think DRY principles sometimes lead people astray, you may enjoy reading.

Obviously-correct decoders can play a primary role in supporting a changing backend API. Writing very simple decoders pushes transformations on incoming data into a separate function, creating a boundary between backend and frontend representations of the data. This boundary makes it possible to modify server data and Elm application modeling independently.

Decoders

In Elm, Decoders & Encoders provide the way to translate JSON into and out of Elm values. Elm is type safe, and it achieves this safety in a dynamic world by strictly defining one-to-one JSON translations.

An example inspired by NoRedInk’s Writing platform follows. We ask students to highlight the claim, evidence, and reasoning of a paragraph in exercises, in their peers’ work, and in their own writing; we need to be able to encode, persist, and decode the highlighting work that students submit.


import Json.Decode exposing (..)
import Json.Decode.Pipeline exposing (..) -- This is the package NoRedInk/elm-decode-pipeline


{-| HighlightedText describes the "shape" of the data we're producing.

`HighlightedText` is also a constructor. We can make a HighlightedText-type record by
giving HighlightedText a Maybe String followed by a String--this is actually how decoders work
and the reason that decoding is order-dependent.
-}
type alias HighlightedText =
    { highlighted : Maybe String
    , text : String
    }

{-| This decoder can be used to translate from JSON,
like {"highlighted": "Claim", "text": "Some highlighted content.."},
into Elm values:

    { highlighted = Just "Claim",
    , text = "Some highlighted content..."
    }
-}
decodeHighlightedText : Decoder HighlightedText
decodeHighlightedText =
    decode HighlightedText
        |> required "highlighted" (nullable string)
        |> required "text" string

How do we create our model?

We’ve now decoded our incoming data but we haven’t decided yet how it’s going to live in our model. How do we turn this data into a model?

If we directly use the JSON representation of our data in our model then we’re losing out on the opportunity to think about the best design of our model. Carefully designing your model has some clear advantages: you can make impossible states impossible, prevent bugs, and reduce your test burden.

Suppose, for instance, that we want to leverage the type system as we display what is/isn’t highlighted. Specifically, there are three possible kinds of highlighting: we might highlight the “Claim”, the “Evidence”, or the “Reasoning” of a particular piece of writing. Here’s our desired modeling:


type alias Model =
    { writing : List Chunk
    }


type Chunk
    = Claim String
    | Evidence String
    | Reasoning String
    | Plain String

So now that we’ve carefully designed our Model, why don’t we decode straight into it? Let’s try to write a single combined decoder/initializer for this and see what happens.


import Model exposing (Chunk(..), Model)
import Json.Decode exposing (..)
import Json.Decode.Pipeline exposing (..)


decoder : Decoder Model
decoder =
    decode Model
        |> required "highlightedWriting" (list decodeChunk)


decodeChunk : Decoder Chunk
decodeChunk =
    let
        asResult : Maybe String -> String -> Decoder Chunk
        asResult highlighted value =
            toChunkConstructor highlighted value
    in
        decode asResult
            |> required "highlighted" (nullable string)
            |> required "text" string
            |> resolve


toChunkConstructor : Maybe String -> String -> Decoder Chunk
toChunkConstructor maybeString text =
    case maybeString of
        Just "Claim" ->
            succeed (Claim text)

        Just "Evidence" ->
            succeed (Evidence text)

        Just "Reasoning" ->
            succeed (Reasoning text)

        Nothing ->
            succeed (Plain text)

        Just otherString ->
            fail ("Unknown highlight type: " ++ otherString)

The decodeChunk logic isn’t terrible right now, but the possibility for future hard-to-maintain complexity is certainly there. The model we’re working with has a single field, and the highlighted data itself is simple. What happens if we have another data set that we want to use in conjunction with the highlighted text? Maybe we have a list of students with ids and the highlights may have been done by different students, and we want to combine the highlights with the students… It’s not impossible, but it’s not as straightforward as we might want.

So let’s try a different strategy and do as little work as possible in our decoders. Instead of decoding straight into our Model we’ll decode into a type that resembles the original JSON as closely as possible, a type which at NoRedInk we usually call Flags.


import Json.Decode exposing (..)
import Json.Decode.Pipeline exposing (..)


type alias Flags =
    { highlightedWriting : List HighlightedText
    }


decoder : Decoder Flags
decoder =
    decode Flags
        |> required "highlightedWriting" (list decodeHighlightedText)


type alias HighlightedText =
    { highlighted : Maybe String
    , text : String
    }


decodeHighlightedText : Decoder HighlightedText
decodeHighlightedText =
    decode HighlightedText
        |> required "highlighted" (nullable string)
        |> required "text" string

Note that HighlightedText should only be used as a “Flags” concept. There might be other places in the code that need a similar type but we’ll create a separate alias in those places. This enforces the boundary between the Flags module and the rest of the application: sometimes it’s tempting to “DRY” up code by keeping type aliases in common across files, but this becomes confusing because it ties together modules that have nothing to do with one another if the data that we’re describing differs in purpose. Internal Flags types ought to describe the shape of the JSON. Type aliases used in the Model ought to be the best representation available for application state. Conflating the types that represent these two distinct ideas may eliminate code, but also eliminates some clarity.

We’re not home yet. We now have a Flags type but we’d really like a Model. Let’s write an initializer to bridge that divide.


import Json.Decode exposing (..)
import Json.Decode.Pipeline exposing (..)


{- FLAGS -}

type alias Flags =
    { highlightedWriting : List HighlightedText
    }


decoder : Decoder Flags
decoder =
    decode Flags
        |> required "highlightedWriting" (list decodeHighlightedText)


type alias HighlightedText =
    { highlighted : Maybe String
    , text : String
    }


decodeHighlightedText : Decoder HighlightedText
decodeHighlightedText =
    decode HighlightedText
        |> required "highlighted" (nullable string)
        |> required "text" string


{- MODEL -}

type alias Model =
    { writing : List Chunk
    }


type Chunk
    = Claim String
    | Evidence String
    | Reasoning String
    | Plain String


{- CREATING A MODEL -}


init : Flags -> Model
init flags =
    { writing = List.map initChunk flags.highlightedWriting
    }


initChunk : HighlightedText -> Chunk
initChunk { highlighted, text } =
    text
        |> case highlighted of
            Just "Claim" ->
                Claim

            Just "Evidence" ->
                Evidence

            Just "Reasoning" ->
                Reasoning

            Just otherString ->
                -- For now, let's default to Plain
                Plain

            Nothing ->
                Plain

We’re still doing the same transformation as before but it’s easier to trace data through the initialization path now: We decode JSON to Flags using a very simple decoder and then Flags to Model using an init function with a type that actually shows what transformation is happening. Plus, as we’ll see in the next section, we have more control and flexibility in how we handle the boundary of our Elm application!

Leveraging Decoders

The example code we’ve been using involves modeling a paragraph with three different kinds of highlights. This example is actually motivated by a piece of NoRedInk’s Writing product, in which students highlight the component parts of their own writing. Earlier this year, students were only ever asked to highlight the Claim, Evidence, and Reasoning of paragraph-length submissions. This quarter, we’ve worked to expand that functionality in order to support exercises on writing and recognizing good transitions; on embedding evidence; on identifying speaker, listener, and plot context; and more. But uh-oh–our Writing system assumed that we’d only ever be highlighting the Claim, Evidence, and Reasoning of a paragraph! We’d been storing JSON blobs with strings like “claim” in them as our writing samples!

So what did this mean for us?

  1. We needed to store our JSON blobs in a new format–the existing format was too tightly-tied to Claim, Evidence, and Reasoning
  2. We needed to migrate our existing JSON blobs to the new format
  3. We needed to support reading both formats at the same time

In a world where the frontend application has a strict edge between JSON values and Elm values and a strict edge between Elm values and the Model, this is straightforward.


import Json.Decode exposing (..)
import Json.Decode.Pipeline exposing (..)


type alias Flags =
    { highlightedWriting : List HighlightedText
    }


{-| This decoder supports the old and the new formats.
-}
decoder : Decoder Flags
decoder =
    decode Flags
        |> custom (oneOf [ paragraphContent, deprecatedParagraphContent ])


type alias HighlightedText =
    { highlighted : Maybe String
    , text : String
    }


paragraphContent : Decoder (List HighlightedText)
paragraphContent =
    {- We've skipped including the actual decoder in order to emphasize
       that we are easily supporting two radically different JSON blob
       formats--it doesn't actually matter what the internals of those blobs are!
    -}
    field "newVersionOfHighlightedWriting" (succeed [])


deprecatedParagraphContent : Decoder (List HighlightedText)
deprecatedParagraphContent =
    field "highlightedWriting" (list deprecatedDecodeHighlightedText)


deprecatedDecodeHighlightedText : Decoder HighlightedText
deprecatedDecodeHighlightedText =
    decode HighlightedText
        |> required "highlighted" (nullable string)
        |> required "text" string

Conclusion

As we’ve seen, it’s easier to reason about data when each transformation of the data is done independently, and using decoders well can help us handle the intermediate modeling moments that are common in software development.

We hope that you’re interested in how NoRedInk’s Writing platform works: We’ve loved working on it and we hope you’ll ask us about it! We’ve gotten to work with some really cool tools and to try out cool architectural patterns (hiii event log strategy with Elm), all while building a pedagogically sound product of which we’re proud. In the meantime, may your modules have clean APIs, your editor run elm-format on save, and your internet be fast.


Tessa Kelly
@t_kelly9
Engineer at NoRedInk


Jasper Woudenberg
@jasperwoudnberg
Engineer at NoRedInk

New! Updated Assignment Form, New Pre-made Diagnostics, and Easier Class Management


Welcome to the 2017-2018 school year! We’ve made some big updates this summer.

New Free Features

Assignment Form

We’ve streamlined the assignment creation process into 3 core steps: pick the type of assignment, select the content, and handle the logistics. Our simplified form makes it faster to get work to your students!

Pre-made Diagnostics

Not sure where to start? Try one of our premade planning diagnostics! The diagnostics select standards-aligned, grade-level appropriate content to get your students started. Once you have student data, you can decide what to teach next.

Here are sample diagnostics for grades 4-6, grades 7-9, and grades 10-12. You can browse our full library of pre-made diagnostics, including diagnostics specifically aligned to state assessments, at this link.

Class Management

Our new class management page is your central hub for controlling your courses and rosters.


New! Interactive lessons, view data as you assign, and reuse past assignments


We’re excited to announce more back-to-school updates to help support you and your students this year!

New Free Features

Interactive Lessons

We’ve rolled out our first batch of interactive lessons, which will introduce students to concepts prior to the start of practice. These lessons include friendly visuals, guided instruction, and targeted tips to set students up for success!

To try out an interactive lesson, go to your Curriculum page, scroll to “Who and Whom,” and then click “Practice this!”

View Data in the Assignment Form

Have student performance data at your fingertips as you create assignments. In the assignment form, expand your student roster to see up-to-date mastery and assessment data. Leverage this data to differentiate assignments for individual students or groups of students.

Reuse Past Assignments

Have assignments from last school year that you loved? On your assignments page, select to view “My archived classes.” You’ll then have the option to share or reuse work from prior classes. Learn more.

New! Curriculum and updates to gradebook, assignments page, and site colors!


We’ve done some cleanup and adjustments to make NoRedInk even easier to use!

New Premium Features

New Exercises on Transitions and Embedding Evidence

We’ve released new pathways focused on “Transition Words and Phrases” and “Avoiding Plagiarism and Using Citations.” Students can develop skills around producing a logical flow of ideas, as well as skills related to paraphrasing, citation, and plagiarism detection.

All topics are available as part of NoRedInk Writing! Free teachers can also try out a topic in each pathway.

New Free Features

Updated Gradebook

Our new gradebook is easier to scan, sort, and export! Learn about the full update here.

Updated Assignments Page

Quickly scan your in-progress, past-due, and upcoming assignments. Take advantage of our prompts to create growth quizzes or other new assignments for your students.

Updated Colors

Our colors got a facelift! We heard from teachers and students that our use of purple during level 1 of mastery could be discouraging or confusing – we’ve updated the colors to be brighter, friendlier, and clearer for your students.

New! “Create a unit” and improved search


Create a Unit

Quickly and easily build a unit of assignments! Start with a Unit Diagnostic and then add on a Practice and a Growth Quiz with a single click. This is a great way to track student growth and ensure skill development.

You’ll see the “create unit” button on your assignments page. You can also check out this Help Center article for more information!

Improved Search

We’ve improved the searchability of our assignment form to make it easier for teachers to find what they’re looking for!

Accidental abstractions


Sometimes we create abstractions in our code without even realizing it. These might turn out to be very useful but more often will come back to haunt us. In this post we’re going to look at one example of such an abstraction in Elm and how we can improve upon it.

Introduction

Suppose we’re working on a brand new social network called MyNemesis. MyNemesis grew out of frustration with existing social networks eating so much of our time. We’re following so many people sharing so many stories that it’s simply too much. MyNemesis addresses this with its central premise: you can have many friends but only a single nemesis.

Abstracting by accident

We’re going to implement a core functionality of MyNemesis: the profile card. These are its requirements:

  1. Anonymous users looking at a profile card should be able to see a name, avatar, and short bio.
  2. Logged in users looking at a profile card should additionally see a button allowing them to send the profile card’s owner a nemesis request.
  3. Logged in users looking at their own profile card should see a button to break up with their current nemesis, if they have one.
  4. Logged in users looking at the profile card of their nemesis should see a button labeled ‘schedule a show down’. When clicked, it should open up a date picker.

Got it? Let’s get to work! In the grand Elm tradition, let’s start by designing a model for this profile card.

type alias Profile =
    { user : User
    , loggedInAs : Maybe User
    , showDownDate : Maybe DatePicker
    }


type alias User =
    { id : UserId
    , name : String
    , avatar : Url
    , nemesis : Maybe UserId
    }

Voilà, that was easy! It looks like this solution can do everything we want. We have the data of both the profile card owner and the currently logged-in user, if one exists. That takes care of requirements 1 through 3. Then we have some state for the date picker necessary to implement requirement 4. We’re done here, let’s go play some foosball.

But, wait, before we do that, let’s quickly check in with our future selves. In the future, just after the site’s 1,000,000th show down ended in a cliffhanger, we revisit the profile card to add a new feature. Anonymous users should see a banner on a profile card saying “Create an account to make {name} your nemesis!”. We haven’t touched this code for a while, so we start by going to the site as an anonymous user and looking at the current profile card. Then we look at the Model and get confused. The user field on it makes sense, but what are those loggedInAs and showDownDate fields about? For present us it’s clear that this is data used by other functionality. For future us it’s extra context that needs to be absorbed, only to find out afterwards that it’s not actually relevant for the task at hand.
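To see the tangle concretely, here is a sketch of what a view over this generic model tends to look like. The helper functions (viewAvatar, viewNemesisRequestButton, and so on) are made up for illustration; the point is the conditional checks every reader has to wade through:

```elm
view : Profile -> Html Msg
view profile =
    div []
        [ viewAvatar profile.user
        , viewName profile.user

        -- Requirement 2: a nemesis-request button, but only when someone is logged in.
        , case profile.loggedInAs of
            Just viewer ->
                viewNemesisRequestButton viewer profile.user

            Nothing ->
                text ""

        -- Requirement 4: a date picker, but only on a nemesis profile.
        , case profile.showDownDate of
            Just datePicker ->
                viewDatePicker datePicker

            Nothing ->
                text ""
        ]
```

Every new feature adds another branch here, and every branch is noise for anyone working on a different use case.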

Intermezzo: do we actually have a problem?

I think it’s fair to ask at this point if we actually have a problem. Sure, those two fields on the model aren’t relevant for anonymous users, but it’s easy to learn what they’re for. We can come up with some better field names or write a little bit of documentation. I’ll still argue that although this is so far a relatively small problem, it is one we could have avoided.

It turns out we unconsciously created an abstraction: that of 'The Profile Card’, when in reality there are many different profile cards. Without realizing it, we made the decision to create one type to support all features, instead of creating separate types for the individual profile cards that exist.

Secondly, it’s not a very good abstraction. Good abstractions lighten our mental load, by using them we hide some details allowing us to focus on the bigger picture. This abstraction does the opposite, using it requires us to know extra details about use cases we’re not interested in. This is a problem that will grow because we have now set the expectation that new profile cards should make use of this generic profile card implementation. Hence every new feature added in any profile card will make all other profile cards more complex.

Let’s be explicit

Let’s make our different use cases explicit by changing our types! That should serve both as excellent documentation for everyone new to the code and as a way for the compiler to prevent us from making mistakes.

type Profile
    = AsLoggedOut User
    | AsLoggedIn User
    | OfOwn User
    | OfNemesis
        { user : User
        , showDownDate : Maybe DatePicker
        }


view : Profile -> Html Msg
view profile =
    case profile of
        AsLoggedOut user ->
            ...

That’s so much better! We’ve now explicitly drawn attention to the fact that these different profiles exist, allowing others to zoom in on the case relevant to them and ignore the rest.

There is one further improvement we can make. On most pages of the site we know exactly which profile should be shown, but we still go through this dance of first wrapping our data into this generic Profile type, passing it to a generic view function and then immediately unwrapping it again in a case statement. We’re making the same decision twice!
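For instance, a page that only ever shows the logged-out card ends up doing something like the following (a sketch; viewLandingPage and viewLoggedOutCard are hypothetical names):

```elm
-- On the landing page we already know the viewer is logged out...
viewLandingPage : User -> Html Msg
viewLandingPage user =
    view (AsLoggedOut user)


-- ...yet the generic view immediately has to rediscover that fact.
view : Profile -> Html Msg
view profile =
    case profile of
        AsLoggedOut user ->
            viewLoggedOutCard user

        _ ->
            -- Unreachable on this page, but the compiler can't know that.
            text ""
```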

Making the same decision twice

It turns out that the union type combining all the different profiles is unnecessary. By removing the union type, we can get rid of that extra layer of conditionals. It looks something like:

type LoggedOutProfile
    = LoggedOutProfile User


type LoggedInProfile
    = LoggedInProfile User


type OwnProfile
    = OwnProfile User


type NemesisProfile
    = NemesisProfile
        { user : User
        , showDownDate : Maybe DatePicker
        }



-- These views are used in the views of the different pages.


viewLoggedOutProfile : LoggedOutProfile -> Html msg
viewLoggedOutProfile profile =
    ...


viewLoggedInProfile : LoggedInProfile -> Html Msg
viewLoggedInProfile profile =
    ...


viewOwnProfile : OwnProfile -> Html Msg
viewOwnProfile profile =
    ...

The smaller types extracted from the union type are actually more powerful on their own! When we use one of these narrower profile types in our functions, we make clear to human and compiler alike that a particular function is meant to be used for a particular profile only. Of course these top level functions can share a lot of the logic related to rendering common parts of the profile card.
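As a sketch of what that sharing might look like: a small helper renders the parts common to every card, and each narrow view adds only its own extras. The viewCard helper and viewNemesisRequestButton are made up for illustration:

```elm
-- A shared frame that every profile card can reuse.
viewCard : User -> List (Html msg) -> Html msg
viewCard user extras =
    div [ class "profile-card" ]
        (img [ src (Url.toString user.avatar) ] []
            :: h2 [] [ text user.name ]
            :: extras
        )


viewLoggedOutProfile : LoggedOutProfile -> Html msg
viewLoggedOutProfile (LoggedOutProfile user) =
    viewCard user []


viewLoggedInProfile : LoggedInProfile -> Html Msg
viewLoggedInProfile (LoggedInProfile user) =
    viewCard user [ viewNemesisRequestButton user ]
```

Because viewCard is polymorphic in msg, it works unchanged for cards that produce no messages and for cards that do.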

At some point these models and views will end up as part of a single top level model and view that describe the entire program, but there’s no benefit in rushing the process to this single model by wrapping similar looking things into union types or extensible records. The more code we can write using the smaller types the better, because it’s functions taking and returning these smaller types which are easier to understand, easier to reuse, and offer more type safety.


Jasper Woudenberg
@jasperwoudnberg
Engineer at NoRedInk

New! Curriculum page refresh


We’ve redesigned our Curriculum page to make browsing a breeze!

New Free Features

Updated Curriculum page

Our new Curriculum page is easier to search! Click on a Pathway to see objectives, topics, interactive tutorials, lessons and more!

Check out the full update here.

New! Preview features & student activity


Find long-form passages, tutorials, and student activity in seconds.

Updated Curriculum page

When creating assignments, teachers often want to preview the content first. Head to our updated Curriculum page, find the concept you’d like to teach, and check out a few key features:

Click on the footprints to see what interactive tutorial goes with your topic! 

Want to prompt students to evaluate and correct a 3-5 paragraph passage that covers content from an entire pathway? Preview our long-form passage quizzes. Just look for the purple icon.

Check out how to use these resources for whole-class modeling here.

Updated “Last Active” on “Manage Students” page

Ever wonder when your students last accessed NoRedInk? Was it during class? Rushing to get an assignment in at 11:59pm? Now you can check! Navigate to the “Manage classes” section, click on the “Students” tab, and see an hourly update of when students were last active.

New! Mastery Tab, Sentence Stems, and Mobile Sign-Up


We’ve updated a few features to give teachers what they’ve been asking for!

New Premium Features

Sentence Stems

Want students to give quality feedback during Step 3 of the Writing Cycle?

Have them keep an eye out for these sentence stems to get them started:

This feature is available as part of NoRedInk’s Writing Cycle. Click here to learn more.

New Free Features

Updated Mastery Tracker

The Mastery tab provides teachers with an overall picture of their class’s current mastery!

This new update takes into account both work that has been assigned and work that has been done by students on their own!

Learn about the full update here.

Updated Mobile Signup

We’ve changed the interface students see when signing up on mobile devices, so students can create accounts on the go!


Win a $1000 DonorsChoose.org gift card for collaborating with your colleagues


Building common assessments or final exams in NoRedInk? Have an assignment that really worked for your students? From now until December 15th, share assignments with other teachers for a chance to win!

How does it work?

  • STEP 1: Create or choose an assignment on NoRedInk to share with other teachers
  • STEP 2: Click the Share icon and select “Share with other teachers”
  • STEP 3: Copy the link and share! Anywhere works: Facebook, email, Pinterest, you name it!
  • From now until Dec. 15th: Each time your assignment is reused, you’ll be entered to win the $1000 DonorsChoose.org gift card!

No special signup required; we’ll track everything automatically! Questions? Let us know here.

FAQs

1. What is NoRedInk?

NoRedInk builds stronger writers through interest-based curriculum, adaptive exercises, and actionable data. We teach over 500 skills, covering composition, grammar, mechanics, usage, and style. Sign up for free today!

2. Why would I want to share an assignment?

When you share an assignment, other teachers can quickly assess their students on the same content you assigned. This is a great way to build common assessments across your department or to help a teacher new to NoRedInk get started.

3. What is DonorsChoose.org?

DonorsChoose.org helps connect educators with potential donors able to contribute to the purchase of classroom supplies or experiences. They state their mission as, “We make it easy for anyone to help a classroom in need, moving us closer to a nation where students in every community have the tools and experiences they need for a great education.”

With a $1000 DonorsChoose.org gift card, you’ll be able to create a project on DonorsChoose.org that NoRedInk will help fund. Learn more at https://www.donorschoose.org/

4. What happens when I share an assignment link?

Any teacher who clicks your link will be able to create an assignment that covers the same content you originally assigned. Teachers will be able to adjust and customize the assignment, but their starting point will match the assignment you created.

5. How can I track how many teachers have used my link?

Unfortunately, this isn’t possible at this time. NoRedInk will be tracking all share counts internally.

6. How will I know if I won?

Our team will email the winner after the competition concludes on December 15. We’ll also announce the winners in our blog!

New! Passage Preview in Assignment Form


New Free Features

Long-Form Passage Preview

Do you want to assign a passage quiz, but first you want to make sure it aligns with what students are learning?

When you go to select a passage to assign, you can now preview it on the assignment form before you assign it to students!

Click here to learn more about how to use passages in your instruction!

New! Updated Student Interests


New Student Interests

Re-energize students in the new year with our updated interests!

We listened to students’ top requests and made our content more engaging and relevant than ever!

Students can now practice with sentences featuring…

  • Hit musicians like Ed Sheeran and Solange
  • Popular superheroes like Supergirl and The Flash
  • The Broadway musical Hamilton
  • Coco, Pixar’s latest hit
  • And much more!

New! Additional State Alignment Filters & Student Invite


New Premium Features

State Alignment Filters

Want to know if content on NoRedInk aligns to your state’s standards? We’ve updated our filters to now include ACCRS (AZ), SCCCR (SC), and OLS (OH).

Keep an eye out to see if your state will be next!

Click here to learn more about filters and how to utilize them on the site.

New Free Features

Student Invitation Link

Teachers can now invite students to join their courses via a link!

Learn about the different ways to enroll students in your NoRedInk class here.

New! Class Activity Feature


New Free Features

Class Activity

Need a quick overview of what’s going on in each of your classes? Click “Show Activity” on a class to view late submissions, student data, and assignment activity.

Learn more about the Class Activity feature here.
