# Continuous Machine Learning

Reading Time: 11 minutes

## What is it?

Continuous Machine Learning (CML) follows the same concept of Continuous Integration and Continuous Delivery (CI/CD), famous concepts in Software Engineering / DevOps, but applied to Machine Learning and Data Science projects.

## What is this post about?

I will cover a set of tools that can make your life as a Data Scientist much more interesting. We will use MIIC, a network inference algorithm, to infer the network of a famous dataset (alarm, from bnlearn). We will then use (1) git to track our code, (2) DVC to track our dataset, outputs and pipeline, (3) GitHub as a git remote and (4) Google Drive as a DVC remote. I’ve written a tutorial on managing Data Science projects with DVC, so if you’re interested in it, open it in a tab to check later.

The first thing is that I don’t really like having to go to the GitHub website all the time, so I will also introduce you to gh, GitHub’s official command line application. We will also use CML, an open-source library for implementing continuous integration & delivery (CI/CD) in machine learning projects, which will link git, DVC and GitHub Actions. The idea is that every time you do something in your repository, some actions will be triggered and executed by GitHub Actions in their computing infrastructure. One example would be using branches as experiments in your ML project, such as several inferences with the same algorithm but different parameters. Every time you commit a parameter change and push, a report would be presented, making it easier (and more beautiful) to compare the results obtained with the different parameters.

## Time to start.

Let’s create our repository on GitHub and make a local copy of it. From the command line! (Instructions here to install gh).
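The exact commands are not shown in this excerpt, but with gh the creation and cloning can be sketched roughly as below. This assumes gh is installed and authenticated; the repository name `cml-tutorial` is a hypothetical placeholder, and the block guards against gh being absent.

```shell
# Hypothetical repository name -- replace with your own
REPO=cml-tutorial

if command -v gh >/dev/null 2>&1; then
    gh repo create "$REPO" --public   # create the repository on GitHub
    gh repo clone "$REPO"             # make a local copy of it
    cd "$REPO"
else
    echo "gh is not installed"
fi
```

With gh, the owner defaults to the authenticated user, so `gh repo clone cml-tutorial` resolves to your own account.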

```r
library(miic)

alarm_dataset <- read.table('data/alarm.tsv', header = TRUE)
res <- miic(input_data = alarm_dataset)

total_edges <- nrow(res$all.edges.summary)
retained_edges <- nrow(res$all.edges.summary[res$all.edges.summary$type == 'P', ])
ratio_edges <- paste0('Ratio of retained edges: ', retained_edges/total_edges)
write.table(ratio_edges, file = 'metrics.txt', col.names = FALSE, row.names = FALSE)
```

This code loads the miic R package, reads the dataset into the R environment, runs miic to infer the network and calculates the ratio of retained edges to the number of possible edges. Then, the ratio is saved to a file named metrics.txt.

## GitHub Actions

Now it’s time to start playing with GitHub Actions to make CML work for us. Every time we push a new commit to the repository, the model will be rebuilt and our metrics recalculated. In order to use GitHub Actions, we need to create a special file in a special folder. The path from within your git repository is: .github/workflows

Inside the folder, you have to create your GitHub Action file. The name is not important, but it must be a file in YAML format. Let’s create a file named cml.yaml inside the path mentioned above.

```shell
mkdir -p .github/workflows
cd .github/workflows
```

Then, create a file named cml.yaml and put the code below inside it. This asks for a machine running the latest version of Ubuntu, sets up an R environment, checks out the current git repository, installs MIIC, DVC and their dependencies, runs dvc pull to fetch our dataset, calls the infer_network.R script, which saves the metrics to a file, and then outputs the metrics.

```yaml
name: dvc-cml-miic
on: [push]
jobs:
  run:
    runs-on: [ubuntu-latest]
    steps:
      - uses: r-lib/actions/setup-r@master
        with:
          version: '3.6.1'
      - uses: actions/checkout@v2
      - name: cml_run
        env:
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          GDRIVE_CREDENTIALS_DATA: ${{ secrets.GDRIVE_CREDENTIALS_DATA }}
        run: |
          # Install miic and dependencies
          wget -c https://github.com/miicTeam/miic_R_package/archive/v1.4.2.tar.gz
          tar -xvzf v1.4.2.tar.gz
          cd miic_R_package-1.4.2
          R --silent -e "install.packages(c(\"igraph\", \"ppcor\", \"scales\", \"Rcpp\"))"
          R CMD INSTALL . --preclean
          cd ..

          # Install Python packages
          pip install --upgrade pip
          pip install wheel
          pip install PyDrive2==1.6.0 --use-feature=2020-resolver

          # Install DVC
          wget -c https://github.com/iterative/dvc/releases/download/1.4.0/dvc_1.4.0_amd64.deb
          sudo apt install ./dvc_1.4.0_amd64.deb

          # Run DVC
          dvc pull
          Rscript infer_network.R

          # Write your CML report
          echo "MODEL METRICS"
          cat metrics.txt
```

Instead of committing this to the master (default) branch, we will create an experiment branch. That’s how you should use DVC! We will analyze the raw version of the alarm dataset, with no pre-processing, so I will call this branch raw_alarm_dataset. You have used dvc pull already, so you have authenticated your machine with Google Drive. Create a GitHub secret with the content of the file .dvc/tmp/gdrive-user-credentials.json and name it GDRIVE_CREDENTIALS_DATA.

```shell
git checkout -b raw_alarm_dataset
# infer_network.R is not in this folder, therefore git add . wouldn't
# add it to the index of your git repository. -A adds everything.
git add -A
git commit -m 'Infers alarm network with MIIC and default parameters'
git push origin raw_alarm_dataset
gh pr create --title 'Network inference of alarm dataset'
```

Now, go to GitHub and check what’s happening. If everything goes according to plan, you will see something like the image below when the check is over.

Well… You got your metrics printed out in the checks log file. Cool, but you probably agree with me that we should expect something more elegant, right? Hehe ^^ Let’s add some lines to our infer_network.R script to make it plot the network, and then let’s change the last part to make use of CML functionalities.
The new infer_network.R should look like:

```r
library(miic)

alarm_dataset <- read.table('data/alarm.tsv', header = TRUE)
res <- miic(input_data = alarm_dataset)

total_edges <- nrow(res$all.edges.summary)
retained_edges <- nrow(res$all.edges.summary[res$all.edges.summary$type == 'P', ])
ratio_edges <- paste0('Ratio of retained edges: ', retained_edges/total_edges)
write.table(ratio_edges, file = 'metrics.txt', col.names = FALSE, row.names = FALSE)

# Plot network
png(file='network_diagram.png')
miic.plot(res)
dev.off()
```

And the new cml.yaml file should look like the code below. The new thing now is that we’re also installing CML and making use of it.

```yaml
name: dvc-cml-miic
on: [push]
jobs:
  run:
    runs-on: [ubuntu-latest]
    steps:
      - uses: r-lib/actions/setup-r@master
        with:
          version: '3.6.1'
      - uses: actions/checkout@v2
      - name: cml_run
        env:
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          GDRIVE_CREDENTIALS_DATA: ${{ secrets.GDRIVE_CREDENTIALS_DATA }}
        run: |
          # Install miic and dependencies
          wget -c https://github.com/miicTeam/miic_R_package/archive/v1.4.2.tar.gz
          tar -xvzf v1.4.2.tar.gz
          cd miic_R_package-1.4.2
          R --silent -e "install.packages(c(\"igraph\", \"ppcor\", \"scales\", \"Rcpp\"))"
          R CMD INSTALL . --preclean
          cd ..

          # Install Python packages
          pip install --upgrade pip
          pip install wheel
          pip install PyDrive2==1.6.0 --use-feature=2020-resolver

          # Install DVC
          wget -c https://github.com/iterative/dvc/releases/download/1.4.0/dvc_1.4.0_amd64.deb
          sudo apt install ./dvc_1.4.0_amd64.deb

          # Run DVC
          dvc pull
          Rscript infer_network.R

          # Install CML
          npm init --yes
          npm i @dvcorg/cml@latest

          # Write your CML report
          echo "## Model Metrics" > report.md
          cat metrics.txt >> report.md
          echo "## Data visualization" >> report.md
          npx cml-publish network_diagram.png --md >> report.md
          npx cml-send-comment report.md
```

Let’s commit.

```shell
git add .
git commit -m 'Uses CML to improve PR feedback'
git push origin raw_alarm_dataset
```

Now, right after the checks are done, you should have an automatic comment with your report, like in the figure below.

Let’s say that I think too many edges have been removed and maybe the network is not consistent. I will change the infer_network.R script to make MIIC look for a consistent network. The miic call now looks like:

```r
res <- miic(input_data = alarm_dataset, consistent='orientation')
```

```shell
git add .
git commit -m 'Makes network consistent'
git push origin raw_alarm_dataset
```

So now I think it’s right and I should approve the pull request 🙂 . I could do it by clicking on the green “Merge pull request” button, or I could use gh again, GitHub’s official command line application.

```shell
gh pr merge 1
```

It will ask you two questions. I chose to create a merge commit and to not remove the branch, be it locally or at GitHub.
To go back to the master branch, you should do:

```shell
git checkout master
```

## Using Docker containers

You probably noticed it takes a while to run the checks and, depending on how many things you want to install, it can take very long. One way out of this situation is to use a Docker container that already has your dependencies installed. The way we’ve been doing it so far is ready for you to use your own containers; after all, I’m installing CML manually. If you don’t want to use a container of your own, but don’t want to download and install CML at every check either, you can use CML’s official Docker container.

Since we merged a pull request, our remote (GitHub) is different from our local repository. To update our local repository, let’s run git pull, and then create a new branch.

```shell
git pull
git checkout -b cml_container
```

Change your cml.yaml to the code below.

```yaml
name: dvc-cml-miic
on: [push]
jobs:
  run:
    runs-on: [ubuntu-latest]
    container: docker://dvcorg/cml
    steps:
      - uses: actions/checkout@v2
      - uses: r-lib/actions/setup-r@master
        with:
          version: '3.6.1'
      - name: cml_run
        env:
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          GDRIVE_CREDENTIALS_DATA: ${{ secrets.GDRIVE_CREDENTIALS_DATA }}
        run: |
          # Install miic and dependencies
          wget -c https://github.com/miicTeam/miic_R_package/archive/v1.4.2.tar.gz
          tar -xvzf v1.4.2.tar.gz
          cd miic_R_package-1.4.2
          R --silent -e "install.packages(c(\"igraph\", \"ppcor\", \"scales\", \"Rcpp\"))"
          R CMD INSTALL . --preclean
          cd ..

          # Run DVC
          dvc pull
          Rscript infer_network.R

          # Write your CML report
          echo "## Model Metrics" > report.md
          cat metrics.txt >> report.md
          echo "## Data visualization" >> report.md
          cml-publish network_diagram.png --md >> report.md
          cml-send-comment report.md
```

Let’s add the changed file, commit it, push and create a Pull Request (PR).

```shell
git add .
git commit -m 'Makes use of CML container'
git push origin cml_container
gh pr create --title 'Use CML container'
```


Everything should have run fine, as shown here. You can merge the pull request and then git pull to update your local copy.

```shell
gh pr merge 2
git pull
```

## What else?

DVC is not limited to data tracking. We could also track our pipeline, including output files such as the images that our infer_network.R script plotted. Imagine that we had some pre-processing code delivering a pre-processed dataset to the infer_network.R script, which would then generate the image with the network. Instead of running all these scripts by hand (and we can easily think of much more complicated scenarios), we could use DVC to create a pipeline, and a single command (dvc repro) in our GitHub Action file would be enough to reproduce the whole pipeline.
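To give an idea of what such a pipeline could look like in DVC 1.x, here is a hypothetical dvc.yaml with a pre-processing stage feeding the inference stage. The script preprocess.R, the stage names and the raw dataset filename are assumptions for illustration, not part of this project.

```yaml
stages:
  preprocess:
    cmd: Rscript preprocess.R          # hypothetical pre-processing script
    deps:
      - preprocess.R
      - data/alarm_raw.tsv
    outs:
      - data/alarm.tsv
  infer_network:
    cmd: Rscript infer_network.R
    deps:
      - infer_network.R
      - data/alarm.tsv
    outs:
      - network_diagram.png
    metrics:
      - metrics.txt:
          cache: false
```

With something like this in place, dvc repro would rerun only the stages whose dependencies changed.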

Besides, instead of installing the same things (R, DVC, CML…) every time we push to the repository, we could have a Docker container with them already installed. This could save us some time :-). In our case here, for example, downloading, compiling and installing MIIC takes a few minutes that could be spared if it were already installed in a Docker container. For our simple example, the time to download and set up the Docker container may not make it worthwhile, but when complexity and dependencies increase, the benefits become more evident.
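As a rough sketch of that idea, a custom image could extend CML’s official one with MIIC preinstalled. This Dockerfile is an assumption for illustration (the exact contents of the dvcorg/cml base image may differ), reusing the versions from this post:

```dockerfile
# Hypothetical image: CML base plus R and a precompiled MIIC
FROM dvcorg/cml

RUN apt-get update && apt-get install -y r-base wget

RUN wget -c https://github.com/miicTeam/miic_R_package/archive/v1.4.2.tar.gz \
 && tar -xzf v1.4.2.tar.gz \
 && cd miic_R_package-1.4.2 \
 && R --silent -e 'install.packages(c("igraph", "ppcor", "scales", "Rcpp"))' \
 && R CMD INSTALL . --preclean \
 && cd .. \
 && rm -rf v1.4.2.tar.gz miic_R_package-1.4.2
```

Pointing the workflow’s `container:` entry at such an image would skip the per-check compilation entirely.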

That’s it for today folks! 😉

You would not be reading this post if it wasn’t for Elle O’Brien, who taught me so much about CML and shared presentations and examples, and David Ortega, who helped me set up the R environment within the CML docker container.

# Best links of the week #63

Reading Time: 2 minutes

Continue…

# Mobility and COVID-19 cases. Did Brazil stop?

Reading Time: 9 minutes

You have probably heard that Google has released a set of mobility reports recently. The site hosting these reports, the so-called COVID-19 Community Mobility Reports, begins with the following sentence: “See how your community is moving differently due to COVID-19”.

### What is it about?

Google offers a Location History feature in its services/systems that monitors the location, and consequently the displacement, of users. This data can be accessed and disabled at any time by users. According to Google, this feature needs to be activated voluntarily, as it is disabled by default. Based on this information, they observed how and where these individuals used to go in a period prior to the COVID-19 outbreak and how and where they are moving now, during the outbreak. There is a clear bias here. People who do not have a cell phone or tablet, or who have not activated this feature, are out of their sampling and this can impact the conclusions of the report. Still, it’s worth a look.

Continue…

# Manage your Data Science Project in R

Reading Time: 9 minutes

A simple project tutorial with R/RMarkdown, Packrat, Git, and DVC.

### The pain of managing a Data Science project

Something has been bothering me for a while: Reproducibility and data tracking in data science projects. I have read about some technologies but had never really tried any of them out until recently when I couldn’t stand this feeling of losing track of my analyses anymore. At some point, I decided to give DVC a try after some friends, mostly Flávio Clésio, suggested it to me. In this post, I will talk about Git, DVC, R, RMarkdown and Packrat, everything I think you may need to manage your Data Science project, but the focus is definitely on DVC.

Continue…

# Spurious Independence: is it real?

Reading Time: 14 minutes

### First things first: Spurious Dependence

Depending on your background, you have already heard of spurious dependence in one way or another. It goes by the names of spurious association, spurious dependence, the famous quote “correlation does not imply causation”, and other versions of the same idea: you cannot say that $X$ necessarily causes $Y$ (or vice versa) solely because $X$ and $Y$ are associated, that is, because they tend to occur together. Even if one of the events always happens before the other, say $X$ preceding $Y$, you still cannot say that $X$ causes $Y$. There is a statistical test, very famous in economics, known as the Granger causality test.

The Granger causality test is a statistical hypothesis test for determining whether one time series is useful in forecasting another, first proposed in 1969.[1] Ordinarily, regressions reflect “mere” correlations, but Clive Granger argued that causality in economics could be tested for by measuring the ability to predict the future values of a time series using prior values of another time series. Since the question of “true causality” is deeply philosophical, and because of the post hoc ergo propter hoc fallacy of assuming that one thing preceding another can be used as a proof of causation, econometricians assert that the Granger test finds only “predictive causality”.

Granger Causality at Wikipedia.

The post hoc ergo propter hoc fallacy is also known as “after this, therefore because of this”. It’s pretty clear today that Granger causality is not an adequate tool to infer causal relationships, and this is one of the reasons why, when $X$ and $Y$ are tested by the Granger causality test and an association is found, it’s said that $X$ Granger-causes $Y$ instead of saying that $X$ causes $Y$. Maybe it’s not clear to you why the association between two variables, together with the notion that one always precedes the other, is not enough to say that one is causing the other. One explanation for a hypothetical situation, for example, would be a third lurking variable $C$, also known as a confounder, that causes both events, a phenomenon known as confounding. By ignoring the existence of $C$ (which in some contexts happens by design and is a strong assumption called unconfoundedness), you fail to realize that the events $X$ and $Y$ are actually independent when taking into consideration this third variable $C$, the confounder. Since you ignored it, they seem dependent, associated. A very famous and straightforward example is the positive correlation between (a) ice cream sales and death by drowning or (b) ice cream sales and homicide rate.
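The confounding structure described above can be stated compactly. If $C$ is a common cause of $X$ and $Y$ (as in $X \leftarrow C \rightarrow Y$), the two variables are marginally associated but independent once we condition on $C$:

```latex
% Marginally, X and Y appear associated:
P(X, Y) \neq P(X)\,P(Y)
% but conditioning on the confounder C renders them independent:
P(X, Y \mid C) = P(X \mid C)\,P(Y \mid C)
```

In the ice cream example, $C$ could be the warm season: it drives both ice cream sales and the number of people swimming, so the two look associated until temperature is taken into account.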

Continue…

# How can I evaluate my model? Part I.

Reading Time: 8 minutes

One way to evaluate your model is in terms of error types. Let’s consider a scenario where you live in a city where it rains every once in a while. If you guessed that it would rain this morning, but it did not, your guess was a false positive, sometimes abbreviated as FP. If you said it would not rain, but it did, then you had a false negative (FN). Raining when you do not have an umbrella may be annoying, but life is not always that bad. You could have predicted that it would rain and it did (true positive, TP) or predicted that it would not rain and it did not (true negative, TN). In this example, it’s easy to see that in some contexts one error may be worse than the other and this will vary according to the problem. Bringing an umbrella with you in a day with no rain is not as bad as not bringing an umbrella on a rainy day, right?
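The four counts above can be tallied mechanically. Here is a toy shell sketch with made-up weather labels (not from the post) that walks through paired actual/predicted outcomes and classifies each pair:

```shell
# Toy data: actual weather vs. the morning guess (made-up labels)
actual=(rain rain dry dry rain)
guess=(rain dry rain dry rain)

tp=0; fp=0; fn=0; tn=0
for i in "${!actual[@]}"; do
  a=${actual[$i]}; g=${guess[$i]}
  if   [ "$g" = rain ] && [ "$a" = rain ]; then tp=$((tp+1))  # predicted rain, it rained
  elif [ "$g" = rain ] && [ "$a" = dry  ]; then fp=$((fp+1))  # predicted rain, it stayed dry
  elif [ "$g" = dry  ] && [ "$a" = rain ]; then fn=$((fn+1))  # predicted no rain, it rained
  else                                          tn=$((tn+1))  # predicted no rain, stayed dry
  fi
done

echo "TP=$tp FP=$fp FN=$fn TN=$tn"   # → TP=2 FP=1 FN=1 TN=1
```

These four numbers are exactly the cells of a confusion matrix, from which metrics discussed later can be derived.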

Continue…

# Best links of the week #20

Reading Time: < 1 minute

Continue…

# Web Scraping, visualização de dados com R e os decretos do Bolsonaro

Reading Time: 13 minutes

How does the current president of Brazil compare with his predecessors in terms of the number of decrees?

Continue…

# Best links of the week #15

Reading Time: 2 minutes

### Best links of the week from 15th April to 21st April

1. When it comes to clustering, depending on the algorithm used, one may have a hard time determining the appropriate k (number of clusters). Some algorithms do not require it, but for the ones that do, such as k-means, you should have a look at the elbow method to evaluate the appropriate k or at the silhouette of objects regarding the clusters.
2. Dunder Data is a professional training company dedicated to teaching data science and machine learning. There is paid and free online material.
3. Software Carpentry, teaching basic lab skills for research computing.
4. ROpenSci, transforming science through open data and software.
5. mlmaisleve, quick and light concepts about Machine Learning.
6. kite, Code Faster in Python with Line-of-Code Completions.
Continue…

# The unintended trap in bracket subsetting in R

Reading Time: 3 minutes
The silent [and maybe mortal?] trap in bracket subsetting.

It should be clear to you that, like several other programming languages, R provides different ways to tackle the same problem. One common problem in data analysis is subsetting your data frame and, as Google can show you, there are several blog posts and articles trying to teach you different ways to subset your data frame in R. Let’s do a quick review here:

Before starting to subset a data frame, we must first create one. I will create a data frame of patients named var_example with two columns, one for vital status (is_alive) and one for birth year (birthyear). Birth year values are 4-digit numbers representing the year of birth. The is_alive column can have one of three values:

- TRUE: The person is alive;
- FALSE: The person is dead;
- NA: We do not know whether this person is alive or dead.

```r
var_example <- cbind(as.data.frame(sample(c(NA, TRUE, FALSE),
                                          size = 100,
                                          replace = TRUE,
                                          prob = c(0.1, 0.5, 0.4))),
                     as.data.frame(sample(c(1980:1995),
                                          size = 100,
                                          replace = TRUE)))
colnames(var_example) <- c("is_alive", "birthyear")
```
Continue…