
Have you ever completed the development of a data product with the feeling that you may have made a mistake, or may not have used the optimal way to clean or process your data?

As an analyst using a “point & click” interface, “rewinding” all the steps at an advanced stage of the development of your product can be extremely painful and lengthy. Errare humanum est, sed perseverare... If you want to learn from your mistakes rather than suffer from them, then analysis reproducibility is what you need.

In this session, we will introduce you to the basics of analysis reproducibility and explain what elements to watch for when you kick-start your analysis, so that you can always rewind and improve any product you have already spent time on. We will also show you how you can learn from analyses done in a reproducible way by other colleagues.

We will also show you, through practical examples, how to implement a fully reproducible data analysis workflow applied to a household survey dataset using the R statistical language: from initial data exploration to joint interpretation to the creation of data stories.

Last, we hope that this session will motivate you to join the vibrant R user community in UNHCR and soon become an R champion. To make the most of the session, we would advise you to install the following open-source environment:

R - https://cran.r-project.org/bin/windows/base/
RStudio (free version) - https://www.rstudio.com/products/rstudio/download/
Create an account on GitHub - https://github.com/join? and install GitHub Desktop - https://desktop.github.com/

You may also start installing the UNHCR packages, following the instructions in their respective documentation published on GitHub:

Use UNHCR Open data - https://unhcr.github.io/unhcrdatapackage/docs/
API to connect to internal data sources - https://unhcr-web.github.io/hcrdata/docs/
Perform High Frequency Checks - https://unhcr.github.io/HighFrequencyChecks/docs/
Process data crunching for survey datasets - https://unhcr.github.io/koboloadeR/docs/
Use the UNHCR graphical template - https://unhcr-web.github.io/unhcRstyle/docs/

Last, you may also take advantage of the R learning content on Learn & Connect: Achieve your potential: UNHCR (csod.com) and see some practical tutorials on https://humanitarian-user-group.github.io/

The best way to start and learn is to have a concrete project! If you have one and need mentoring, we can liaise after the session.

Analysis Reproducibility

Why it matters and how to do it

8 July 2021

1 / 23

Learning objectives

About today:

  • Understand what you can gain from analysis reproducibility.

  • Know what the main technical requirements are to set up for your analysis to be reproducible.

  • Have a demonstration of a practical way to “make a cake” using household survey data: crunching, analysis & interpretation, and data stories!

Not today:

2 / 23


A Vision for data analysis

"Multi-functional teams, with strengthened data literacy, regularly conduct meaningful and documented joint data interpretation sessions to define their strategic directions based on statistical evidence"

3 / 23

A Theory of Change for Data analysis

Proper use of data for advocacy & programmatic decision making

Corporate Standards exist to define how to encode & process household survey datasets

Field data experts are trained based on precise recipes and predefined tools at each step of the data life cycle

Data are presented, discussed and linked to expert knowledge during data interpretation sessions with a multi-functional team

All potential valid interpretations, including diverging views, are systematically recorded

Persuasive "Data Stories" and Policy papers are generated

4 / 23

Data Science is like cooking

When a chef is starting out with a new dish...

  • Hypothesis tasting = setting the right questions

  • Ingredients = source the Data

  • Wash your food = clean your data

  • Flavor engineering = create calculated & derived variables

  • Taste and explore = reshape & visualize the data

  • Tune your oven = statistical modeling

  • Art of plating = apply the brand style

  • Document your recipe = add technical comments

Eat the cake
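The last step of the recipe - documenting it - can be as light as commenting each step of a preparation function. A minimal sketch, where all column names and thresholds are hypothetical, not from a real UNHCR dataset:

```r
#' Prepare the household dataset ("the recipe")
#' Hypothetical sketch: each step is annotated so that a colleague
#' (or future you) can rewind and reproduce the preparation.
prepare_household <- function(raw) {
  # Wash the food: drop records with no household size
  raw <- raw[!is.na(raw$hh_size), , drop = FALSE]
  # Flavor engineering: derive a calculated variable
  raw$large_household <- raw$hh_size >= 6
  raw
}

prepare_household(data.frame(hh_size = c(3, NA, 7)))
```

Because the recipe lives next to the code, rerunning it always reproduces the same dish.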

5 / 23

https://towardsdatascience.com/data-science-explained-with-cooking-1a801731d749

https://towardsdatascience.com/5-reasons-why-data-science-is-like-cooking-daa506b4166a

Without good ingredients, you can’t cook a good dish. Most time and effort are spent on cleaning and preparing the ingredients.
Different tools and techniques are needed for different recipes. Cooking is both a science and an art.
You can’t become a great cook overnight.

Information Anxiety & Analysis paralysis

When people do not want to eat the cake...

Potential source of reluctance...

  • I do not know how to eat it: I see all those elements on it without being able to understand why they were added there and how this works...

  • I do not trust this cake: How was it created? Did you follow the recipe correctly? Were the ingredients fresh? Can I trust how you sourced the ingredients?

  • This is not the cake I need! It looks too heavy & too big: I will not be able to digest it...

  • I am not hungry and do not even know what cake I want...


6 / 23

simple data visualization for decision making or complex pattern interpretation for knowledge building

Data Products: When? What?

Dashboards are relevant for displaying KPIs! (like when you drive your car...)

Key Performance Indicators (KPIs) are indicators specifically designed to show progress toward an intended result, i.e., a predefined target

Create an analytical basis for decision making, aka Business Intelligence

Help focus the attention of Senior Management on what matters most - a good dashboard needs to be concise

Building a data product - start:

  • Need to display Key Performance Indicators (KPIs)?
      • Yes → Dashboard with PowerBI
      • No → Basic statistics?
          • Yes → Polished visuals with brand style?
              • Yes → Notebook with Rmd
              • No → Quick processing is critical?
          • No → Statistical modeling / machine learning?
              • Yes → Quick processing is critical?
              • No → Notebook with Rmd
  • Quick processing is critical?
      • Yes → Application with Python
      • No → Need for interactivity?
          • Yes → Interface with R-Shiny
          • No → Notebook with Rmd
7 / 23

Why we need to work in a reproducible way?

Ethics, Productivity, Learning

8 / 23

Ethics: Science is 'show me' - not 'trust me'

Reproducibility allows for peer review

Peer Review allows for transparency

Transparency allows for scrutiny

Scrutiny allows for accountability

It's okay to make mistakes, as long as we can detect them and learn from them...


9 / 23

Ethical principles into algorithmic design - would apply as well when designing household vulnerability scoring formula to inform humanitarian targeting - https://www.hum-dseg.org/sites/default/files/2020-10/Framework%20for%20the%20ethical%20use.pdf

Productivity: getting things done quickly and safely!

Automation through functions & scripts can help you skip repetitive tasks

Tasks that involve recurrent data manipulation are undertaken by teams... but not everyone in the team needs to be a geek/coder!

When enough investment can be made, Graphical User Interfaces (GUIs) can be developed for specific functions to ease the learning curve of new users while they are still building up their personal R skills.
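As a minimal sketch of such automation - the column name and answer codings below are hypothetical, not from a real UNHCR dataset - a repetitive recoding step can be wrapped in a function and reused across columns and datasets:

```r
# Hypothetical example: wrap a repetitive survey-cleaning step into a function
# so it can be re-applied identically to every column and every new dataset.
clean_yes_no <- function(x) {
  x <- tolower(trimws(x))  # harmonise case and stray spaces
  ifelse(x %in% c("yes", "y", "1"), "yes",
         ifelse(x %in% c("no", "n", "0"), "no", NA_character_))
}

survey <- data.frame(has_shelter = c("Yes", " NO ", "y", "maybe"),
                     stringsAsFactors = FALSE)
survey$has_shelter <- clean_yes_no(survey$has_shelter)
```

Because the logic lives in one function, fixing a bug there and re-running the script corrects every column that uses it.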

Automate

10 / 23

An R-Community geared towards learning

Which of the two approaches proposed here is the most appealing exercise?

Start from an end-product and reverse engineer it!

Eat the cake first! (then play with and change ingredients...)


11 / 23

Conditions for reproducibility

Sourcing data, documenting analysis, & packaging output

12 / 23

Preparing data

Data wrangling usually takes more than 80% of any data project's time...

Imagine if you need to rewind your analysis...

Correct any step in the process and re-run everything...
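One way to keep every step rewindable is to express the preparation as small functions chained in a script, so a fix at any step only requires re-running the chain. An illustrative sketch (the data and steps are stand-ins, not a real workflow):

```r
# Illustrative pipeline: each preparation step is a function, so the whole
# chain can be re-run from scratch after correcting any single step.
load_raw <- function() {
  # stand-in for something like read.csv("household_survey.csv")
  data.frame(age = c("25", "7", "forty"), stringsAsFactors = FALSE)
}
clean_step  <- function(d) { d$age <- suppressWarnings(as.numeric(d$age)); d }
derive_step <- function(d) { d$minor <- d$age < 18; d }

household <- derive_step(clean_step(load_raw()))
```

If a mistake is later found in `clean_step`, it is fixed once and the last line re-runs the whole preparation; nothing has to be redone by hand.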


13 / 23

Documenting analysis


14 / 23

Packaging functions

Gradual automation

  • Level 1: write a command
  • Level 2: organize multiple commands together into a reusable function
  • Level 3: organize multiple functions together into a package
  • Level 4: include test data & documentation
  • Level 5: unit testing & code review
  • Level 6: Graphical User Interface (GUI)


15 / 23

Hands-on practice: a practical run-through based on a household survey dataset

Crunching, Interpretation & Dissemination

16 / 23

Step 1 - Notebook for Automatic Data exploration, aka "crunching"


17 / 23

A comparison of packages for Automated Exploratory Data Analysis https://arxiv.org/pdf/1904.02101.pdf https://github.com/mstaniak/autoEDA-resources
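Even without those packages, a first "crunching" pass can be sketched in base R (the toy data below is illustrative); the dedicated autoEDA packages compared in the paper above then automate far richer notebook reports:

```r
# Minimal automated exploration in base R; autoEDA packages generate
# much richer reports from the same starting point.
df <- data.frame(hh_size = c(4, 6, 2, 5),
                 region  = c("A", "B", "A", "C"))
summary(df)       # per-variable distribution summary
table(df$region)  # frequency table for a categorical variable
```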

One of the most significant disadvantages of PowerBI is that it is read-only. As a user, you cannot use charts/tables to make decisions and save them in a database directly. Also, PowerBI doesn’t have an accessible source code. You can only edit fields in WYSIWYG mode, which makes PowerBI easy to start but difficult to maintain. Having no source code makes it nearly impossible to have proper version control, automatically test logic, or collaborate on large projects. https://appsilon.com/powerbi-vs-r-shiny/

Pros of Power BI:

  • Cross-filtering

Pros of RStudio:

  • Visual editor for R Markdown documents
  • In-line code execution using blocks
  • Sophisticated statistical packages
  • Supports Rcpp, python and SQL
  • Can be themed
  • In-line graphing support
  • Latex support

What is RStudio? An integrated development environment for R, with a console and a syntax-highlighting editor that supports direct code execution. It lets you publish and distribute data products across your organization, with one-button deployment of Shiny applications, R Markdown reports, Jupyter Notebooks, and more. Packages are collections of R functions, data, and compiled code in a well-defined format; you can expand the types of analyses you do by adding packages.

You can use R script with Power BI in several ways. The key ones include:

  • Data source – expand the range of available data sources, provide data processing logic at the import stage
  • Processing script – enrich data processing capabilities and engineering features
  • Visualization – extend the range of visualizations available in R and implement charts
  • As a part of Power BI visualization – gain the ability to build R-based visualizations and package them into ready-to-use boxes directly in Power BI (https://www.predicagroup.com/blog/visualizing-data-r-script-power-bi/)

Step 2 - Notebook for Data Insights documentation: Analysis Repo

Insight: The capacity to gain an accurate and deep understanding of someone or something

Not all charts will trigger the need for interpretation - the data analyst needs to generate the ones that can create debate.

Charts need to be crafted - for instance, use chart titles framed as "opening questions"...

Insights arrive when a multifunctional team is able to explain unexpected patterns, to challenge or revise existing assumptions, or to identify evidence to support a call to action.


18 / 23

Step 3 - Notebook to communicate with data: Microsite

From assumptions to evidence-based statements

Data is there to support the narrative - not the other way around!

Leverage the art of data storytelling to:

  • Explain,
  • Enlighten,
  • Engage


19 / 23

See https://github.com/unhcr-americas/ageingonthemove/blob/main/README.md

https://distill.pub/2020/communicating-with-interactive-articles/#applications-tab

Research Dissemination: Conducting novel research requires deep understanding and expertise in a specific area. Once achieved, researchers continue contributing new knowledge for future researchers to use and build upon. Over time, this consistent addition of new knowledge can build up, contributing to what some have called research debt. Not everyone is an expert in every field, and it can be easy to lose perspective and forget the bigger picture. Yet research should be understood by many. Interactive articles can be used to distill the latest progress in various research fields and make their methods and results accessible and understandable to a broader audience.

Opportunities:

Engage and excite a broader audience with the latest research progress
Remove research debt, onboard new researchers
Make faster and clearer research progress

Challenges:

No clear incentive structure for researchers
Little funding for bespoke research dissemination and communication
Not seen as a legitimate research contribution (e.g., to the field, or one's career)

Conclusion

20 / 23

R in Humanitarian Context

You are not alone

More than 450 users from multiple organisations in the humanitarian-useR-group

Already around 20 R champions within UNHCR, versus more than 420 PowerBI Pro users

Try to begin by using existing UNHCR packages, and start from a project you can reproduce


21 / 23

A call for Institutionalisation

Using the standard multi-tier IT support model to enhance reproducible analysis...

  • Tier 4: Code Review & Quality Assurance / Contracted company with global frame agreement

  • Tier 3: Internal package development / Internal R champions team (cost: one yearly Rdev meeting to incentivize contributing staff)

  • Tier 2: User induction & advanced user support / Global Data Service/DIMA (Data Science Team)

  • Tier 1: Basic user troubleshooting / Global Service Desk (WIPRO, according to documented scenarios)

  • Tier 0: Self-support / Package documentation (maintained and improved on a continuous basis)


22 / 23

Your Opinion Counts

Please fill in this survey to share your opinions and thoughts on the topics presented here

Slides are available in English & French - slide notes can be displayed by pressing the keyboard shortcut p - and in PDF (English & French) within this GitHub repo

23 / 23


