Jenkins COVID Analytics Automation
This project describes a data acquisition and analytics pipeline built on Jenkins, Python and Gogs (a self-hosted Git service, similar to a private GitHub). Jenkins, a CI/CD tool widely used in DevOps, automates the complete process, while Python scripts do the heavy lifting: when triggered by Jenkins, they pull the data and generate the analytics. Gogs acts as both the private and the public repository. To showcase this functionality I created a COVID_Public and a COVID_Private repository. Data published to the “Public” repository triggers a chain of events that generates the analytics.
Get Data | Analytics | Publish Data
As a personal goal I also wanted to showcase, or at least simulate, a development workflow. In a development environment, small groups work on specific parts of an application, and each part gets integrated into the whole. Several tests are performed, and if they pass, the code is made available to the public.
You could say that the webhook job simulates the integration (CI) part, and the analytics job stands in for a test job.
The development environment is the first one where software is deployed after being integrated. It changes constantly as new code is contributed, and may be in a non-functional state at any given time.
Developers use this environment to conduct basic functional testing, sometimes referred to as “smoke testing.”
Test | QA | UA
Test environments are commonly used for integration, performance, and functional testing: code gets tested for its essential functionality. In the UA phase, customer expectations and requirements get validated.
It is the public repository for project releases; it’s where the customer goes to get the latest versions.
Jenkins has been configured to “listen” for webhook requests. Each time a commit is pushed to the master branch of the Public_Repo, Gogs will send the webhook trigger to Jenkins.
The public repo hosts a markdown template with placeholders for the analytics. The analytics script generates images, and when the pipeline finishes its execution it fills in those placeholders. As an end result you get up-to-date statistics from COVID-19 data.
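The template itself isn’t reproduced in full here, so the placeholder syntax and the placeholder names below are assumptions; a minimal Python sketch of the fill-in step could look like this:

```python
# Sketch of the placeholder-filling step. The {{...}} syntax and the
# placeholder names are hypothetical -- the real template lives in the
# public repo.
TEMPLATE = """# Romania_COVID_Analytics

### General stats
Last updated: {{date}}
Total confirmed cases: {{total_cases}}
"""

def fill_template(template: str, values: dict) -> str:
    """Replace every {{key}} placeholder with its computed value."""
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", str(value))
    return template

readme = fill_template(TEMPLATE, {"date": "2020-05-01", "total_cases": 12567})
```

The filled-in string is what the publish step would commit back as the README.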
# Romania_COVID_Analytics (Under construction)
Representations of COVID data for Romania
### General stats
<img align="Center" src="Images/general_stats.png" width=1000>
### Time Series
The graph shows the current **COVID-19** evolution:
<img align="Center" src="Images/covid_timeseries.png" width=1000>
### COVID growing rates
<img align="Center" src="Images/covid_trends.png" width=1000>
## Total cases by county (left) | Total deaths by county (right)
<img src="Images/total_county.png" height=350 width="450"/> <img src="Images/total_dead.png" height=350 width=450/>
### Numbers by county
<img align="Center" src="Images/county_numbers.png" width=1000/>
The rise of COVID-19 generated a lot of community initiatives to help fight the pandemic. On the analytics side, Johns Hopkins University did an admirable job, but there were also other good projects.
Within this project I show how we can use Jenkins to:
- pull data-sets from public sources
- generate the analytics
- publish the analytics
The workflow can be seen as a chained sequence of steps (build steps). Each successful build step triggers the next build.
- Data is available to the public.
- A Jenkins job listens for new data, and based on a specific action (webhook trigger) it starts a build job.
- A Python script generates the analytics.
- Analytics results are pushed to the public repository.
Architecturally, Jenkins is fairly simple. Users of Jenkins create and maintain jobs, or projects. A project is a collection of build steps, and each run of those steps is a build. The term “build” comes from Jenkins’ heritage as a build automation system: “building” software typically refers to compilation, in which high-level, human-written code is translated into machine code.
Jenkins organizes each project into a workspace under its home directory:
    COVID_Public
    ├── Images
    └── README.md
    COVID_Private
    ├── Images
    ├── README.md
    └── datasets
        ├── getCasesByCounty.json
        ├── getDailyCaseReport.json
        ├── getDeadCasesByCounty.json
        ├── getHealthCasesByCounty.json
        └── romania-counties.json
Analytics end-to-end pipeline
The first job listens for new data through a webhook trigger: when new data is available, a notification is sent to Jenkins, and following this event the dataset is downloaded locally. The trigger fires on data committed to the public repo, but this functionality must be configured first.
Jenkins setup (Build job setup)
1. Click “New Item”
2. Choose “Freestyle project” and give the project a name
3. Under Source Code Management choose Git:
   - Repository URL: add the HTTP repository URL
   - Choose the Gogs credentials
4. Build trigger:
Gogs webhook setup:
- Click Settings → Webhooks
- Set up the Payload URL
- Check “Webhook based on push event”
- Activate it and test the delivery
- Create a new job (see previous Jenkins steps 1 and 2)
In the Build section check “Execute shell” and add a script along these lines (the paths and the script name are placeholders):

    #!/bin/bash
    export PATH=/path/to/anaconda3/bin:$PATH
    conda activate project_env
    cd /path/to/project
    python covid_analytics.py

The first line tells Jenkins to run the step with bash; Jenkins does not use bash by default, and to avoid unwanted behavior it’s best to request it explicitly. The second line tells Jenkins where to look for the Python version specific to our project. The third line activates that particular Python environment: Conda/Anaconda lets the user create isolated “virtual Python environments”, and each of these environments can have different libraries. The fourth line changes into the directory that holds our code, and the last line executes the script.
The read_data function loads the datasets into pandas DataFrame objects, and add_geodata merges in the geographical coordinates.
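The functions themselves aren’t listed in this post; as a minimal sketch of the same idea (column names such as `county`, `lat` and `lon` are assumptions), loading records into a DataFrame and merging coordinates might look like:

```python
import pandas as pd

# Minimal sketch of read_data/add_geodata. Column names ("county",
# "cases", "lat", "lon") are assumptions; the real datasets are the
# get*.json files in the private repo.
def read_data(records):
    """Load a list of JSON-style records into a DataFrame."""
    return pd.DataFrame(records)

def add_geodata(cases_df, geo_df):
    """Attach geographical coordinates to each county's case counts."""
    return cases_df.merge(geo_df, on="county", how="left")

cases = read_data([{"county": "Cluj", "cases": 120},
                   {"county": "Iasi", "cases": 95}])
geo = pd.DataFrame({"county": ["Cluj", "Iasi"],
                    "lat": [46.77, 47.16], "lon": [23.59, 27.59]})
merged = add_geodata(cases, geo)
```

A left merge keeps every county from the case data even if a coordinate entry is missing.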
get_statistics calculates the mean, standard deviation, and minimum/maximum values. fit_4fbprophet makes the necessary transformations on the data: the forecasting function requires a two-column dataframe, where the ds column holds the date and y the actual count.
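As a sketch of that reshaping step (the input column names are assumptions), the Prophet-style preparation essentially selects and renames two columns:

```python
import pandas as pd

# Sketch of fit_4fbprophet: Prophet expects exactly two columns,
# "ds" (the date) and "y" (the value to forecast). The input column
# names "date" and "confirmed" are assumptions.
def fit_4fbprophet(df, date_col="date", value_col="confirmed"):
    out = df[[date_col, value_col]].rename(columns={date_col: "ds",
                                                    value_col: "y"})
    out["ds"] = pd.to_datetime(out["ds"])
    return out

daily = pd.DataFrame({"date": ["2020-04-01", "2020-04-02", "2020-04-03"],
                      "confirmed": [100, 140, 190]})
prophet_df = fit_4fbprophet(daily)
```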
fit_4timeseries transforms the time series data into lists; this manipulation is necessary for the time series plotting function, which graphs the confirmed, recovered, and dead counts.
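A minimal sketch of that list conversion, assuming the three series are columns of one DataFrame:

```python
import pandas as pd

# Sketch of fit_4timeseries: the plotting helper wants plain Python
# lists, one per series. The column names are assumptions.
def fit_4timeseries(df):
    return {col: df[col].tolist() for col in ("confirmed", "recovered", "dead")}

ts = pd.DataFrame({"confirmed": [10, 20], "recovered": [1, 3], "dead": [0, 1]})
series = fit_4timeseries(ts)
```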
plot_map draws the counties on a map of Romania; the color intensity of each county matches its number of cases.
scatter_plot draws time series data.
forecast_model produces a 22-day forecast.
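The project uses fbprophet for the actual forecast; the stand-in below deliberately swaps Prophet for a plain linear fit, purely to illustrate the input/output shape (a ds/y history in, 22 future rows out):

```python
import numpy as np
import pandas as pd

# Simplified stand-in for forecast_model (the project uses fbprophet).
# A straight-line fit replaces Prophet here only to illustrate the
# 22-day-ahead output shape.
def forecast_model(history, periods=22):
    x = np.arange(len(history))
    slope, intercept = np.polyfit(x, history["y"], 1)
    future_x = np.arange(len(history), len(history) + periods)
    future_dates = pd.date_range(history["ds"].iloc[-1],
                                 periods=periods + 1, freq="D")[1:]
    return pd.DataFrame({"ds": future_dates,
                         "yhat": slope * future_x + intercept})

hist = pd.DataFrame({"ds": pd.date_range("2020-04-01", periods=10, freq="D"),
                     "y": np.arange(10) * 5.0 + 100})
forecast = forecast_model(hist)
```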
The “geometry” column in our dataset contains Polygon county boundaries, each corresponding to a different region of Romania. There are different geometry types; the most common are Point, LineString, and Polygon.
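For illustration, here is a stdlib-only look at a GeoJSON feature shaped like the entries in romania-counties.json; the county name and coordinates are made up (a square, not a real boundary):

```python
import json

# Minimal GeoJSON feature, structured like an entry in
# romania-counties.json. The coordinates are a made-up square.
feature_json = """{
  "type": "Feature",
  "properties": {"name": "ExampleCounty"},
  "geometry": {
    "type": "Polygon",
    "coordinates": [[[23.0, 46.0], [24.0, 46.0], [24.0, 47.0],
                     [23.0, 47.0], [23.0, 46.0]]]
  }
}"""

feature = json.loads(feature_json)
geom = feature["geometry"]
ring = geom["coordinates"][0]  # exterior ring; first point == last point
```

In GeoJSON, a Polygon is a list of rings, and each ring must close by repeating its first coordinate as its last.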
Similar to the previous build job, set up the repository URL and choose the appropriate credentials. This time, however, instead of the private URL I will use the public one:
Build step | Shell execution
The first line activates the bash shell.
The second one copies the Images from the analytics directory to the current directory.
The third one adds, commits, and pushes the changes to the public repository.
Originally published at https://mpruna.github.io.