How to make a good data-driven web app

May 25, 2016

Developing a successful app or project is no easy task; there are always more moving parts than you’d expect. Even beyond the technical pieces, there are the ever-important elements of getting the word out, making your app easy to use, and making sure it’s solving the right problem in the first place.

We recently hosted an evening of lightning talks from five up-and-coming Chicago-area projects, all of which have faced these challenges and more. All five are Knight Foundation award recipients, and all seek to engage communities and improve the way data journalism is done and delivered. Here they are, along with excerpts from their talks about how they developed their projects successfully.

Citizens Police Data Project

The Citizens Police Data Project (by The Invisible Institute and The Experimental Station) is a cross-disciplinary enterprise dedicated to bringing more transparency and accountability to Chicago’s police force. Three of the project’s leads, Rajiv Sinclair, Chaclyn Hunt, and Harry Backlund, told the story of how a group of lawyers, researchers, journalists, and do-gooders made a FOIA request for data on complaints against Chicago police officers and whether those complaints had led to disciplinary action. Surprise, Chicago said yes! But the city only released (purposefully crooked?) photocopies of internal documents that were hard to read and search. The Citizens Police Data Project took all this amassed information and created a central repository for exploring, analyzing, and aggregating the full disciplinary history of police officers at both the macro and micro level.

Rajiv, Chaclyn, and Harry’s advice: Never lose sight of who you’re working to help and why.

Data from the document dump is now available in a searchable, filterable, and mappable way to expose systemic patterns and individual offenders usually obscured from the public due to a lack of disciplinary action. Also, each original source document is accessible at the click of a button so that users never lose sight of those affected by the actions and offenses described in these documents. Having garnered a lot of attention, this project is sure to evolve, so stay tuned.

screenshot of Citizens Police Data Project

Documents Empowerment Project

The Documents Empowerment Project (by mRelief) is a platform that helps people get access to public benefits, like food stamps. It’s focused on helping an underserved population by providing rapid evaluation of a user’s eligibility for public benefit programs, actionable next steps for both eligible and ineligible persons, and guidance for amassing all necessary documentation. The tangible result of these efforts is food in the fridges of Chicago’s hungry families. Project lead Rose Afriyie spoke passionately about how they’ve managed to reach so many throughout Chicago and how they are actively striving to increase their impact.

Rose’s advice: mRelief’s success is rooted in its strong emphasis on user experience (UX) for a population whose experience is rarely considered when developing technology and app/web services. They’ve been actively experimenting with different marketing strategies with the goal of expanding their reach and impact in the community. Also, this badass organization is using its next round of Knight Foundation funding to expand. They’re currently hiring full-stack Ruby developers! If you’re awesome, or you know someone who’s awesome, reach out.

screenshot of mRelief's Supplemental Nutrition Assistance Program (SNAP) page


Scrubadub

Datascope’s own co-founder, Dean Malmgren, spoke about Scrubadub, a tool that removes personally identifiable information from raw text for research and other privacy-sensitive purposes. For example, a message reading Hey Dean, how do you like them apples? 555-1194. becomes something like Hey __, how do you like them apples? __.
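The transformation in that example can be sketched in a few lines of Python. This is only a toy illustration of the idea, not Scrubadub's actual implementation: the `scrub` function, its explicit name list, and the phone-number regex are simplifications invented here (the real tool detects identifiers automatically).

```python
import re

def scrub(text, names):
    """Toy PII scrubber: masks known names and US-style phone numbers.

    Illustrative only -- the real scrubadub library uses much smarter
    detectors instead of a hand-supplied name list.
    """
    # Mask each known name.
    for name in names:
        text = re.sub(re.escape(name), "__", text)
    # Mask 7- or 10-digit phone numbers like 555-1194 or 312-555-1194.
    text = re.sub(r"\b(?:\d{3}-)?\d{3}-\d{4}\b", "__", text)
    return text

print(scrub("Hey Dean, how do you like them apples? 555-1194.", ["Dean"]))
# prints: Hey __, how do you like them apples? __.
```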

Development originally started as a one-off project, but soon enough multiple groups were reaching out for help with cleaning their data in strikingly similar ways. It quickly became clear that this kind of tool could help people across industries analyze their unstructured text more easily and ethically. The next step was obvious: build it out! A live demo is now up and running.

Dean’s advice: Sometimes the most valuable thing to do is to partner with the right groups to keep up a project’s momentum, as Scrubadub has done with Northwestern’s NUvention incubator. They’re currently continuing to lead efforts for prototyping a business model around the tool.

Interviewing possible users directly helped shape the direction of the project tremendously. Scrubadub’s original use case was academic research, but it is now being developed for e-discovery and legal services, FOIA officers, and Institutional Review Boards (IRBs). Next steps include building in machine learning algorithms and designing features tailored to these different audiences.

scrubadub concept illustration


Nineteen

Nineteen (by IIT Institute of Design) is an online tool for visualizing qualitative data in quantitative ways. As explained by project lead Kim Erwin and collaborator Ted Pollari, users can quickly and easily iterate to glean the most out of large datasets without a huge investment of time. Based on the most beautiful Excel sheets ever seen—the painstaking work of the truly dedicated (and possibly mad) researcher—the tool is both informative and fun to use. Here’s hoping more tools make our jobs easier in such an enjoyable way!

Kim’s advice: Pick one thing, do it well, and do it right. Many of Nineteen’s features exist to maximize value for qualitative researchers. For instance, all data stays local and nothing is uploaded to a server, which guarantees the protection of researchers’ data and ensures the privacy of their subjects. Users can still share their work, though: they can save a full HTML version of Nineteen with a dataset embedded in it—with variables preset to show a specific visualization—and email it to others.

screenshot of Nineteen in action


YourNextRepresentative

YourNextRepresentative (by DataMade) is a crowdsourcing website that seeks to increase electoral transparency. Project lead Derek Eder spoke about the lessons they learned while developing the site, many of which are relevant to just about anyone dealing with data collection, aggregation, consolidation, and analysis.

Derek’s advice: First of all, make it fun! Gamification can be a great way to engage the public in crowdsourcing. Also, since others have often already tried (or are trying) to accomplish the same thing you are, look for what else is out there and consolidate if your data is sparse. Unfortunately, finding out who those others are can be one of the biggest challenges to unifying data. An even bigger challenge is that the various sources all structure, store, and aggregate their data differently, which makes consolidation very difficult. DataMade’s next goal is to systematize this aggregation and provide access to a consolidated view of electoral data using a new tool they’ve developed for matching and linking data.

screenshot of Your Next Representative


Needless to say, Chicago has a lot of awesome people and organizations in the space of technology, data, journalism, and civics. We commend these talented folks (and many others) for tackling tough problems in novel ways, and we’re eager to see what else they’ll come up with in the future. If you’d like to bring our attention to other Chicago-based groups doing exceptional work, feel free to give a shout out (with links!).

Contributors to this post

headshot of Jess Freaner
headshot of Bo Peng