I’m describing the steps required to manually get Jenkins to: ⓵ clone a Python project GitHub repository, ⓶ create a virtual environment, ⓷ run editable install, and finally, ⓸ run Pytest for all tests.
Manual, in the context of this post, means we have
to click something on the Jenkins UI to get a build going.
I’ve installed Jenkins
2.388 LTS on my Ubuntu 22.10 machine. The installation process is pretty straightforward; I’m
including a description of it in the last part of this post.
Once successfully installed and running, we can access the Jenkins front end at
http://<machine-ip-address>:8080 or http://<machine-name>:8080.
In my case, from
Windows 10, http://hp-pavilion-15:8080/, or on Ubuntu 22.10 where
Jenkins is installed, http://localhost:8080.
As usual, I failed several times before getting my first project going. I am
documenting what I’ve learned in this post, so it is by no means a tutorial. I’m planning
to write more on Jenkins as I go along, which is why CI/CD #01 is in the title.
I’m going back to this
https://github.com/behai-nguyen/app-demo.git
repo for this post. We’ll set up a Jenkins project to carry out the four (4)
steps outlined in the introduction. Let’s get to it.
❶ Log into Jenkins using the admin user name and password created as part
of the installation. Click on Dashboard located on top left
hand side corner; then click on + New Item underneath.
❷ On the next page:
Enter an item name: app_demo — this name matches the directory
name of the project. (In hindsight, the name of the repo should’ve been app_demo
instead of app-demo!)
Then select the first option Freestyle project.
Finally, click on the OK button to move to the Configure page.
❸ On the Configure page:
Under General, enter something meaningful for
Description, e.g.: “Try Jenkins on app-demo repo.”
Under Source Code Management:
Select Git. Then,
For Repository URL, enter
https://github.com/behai-nguyen/app-demo.git. Since this
is a public repo, anybody can clone it, so we don’t need any credentials
to access it.
For Branch Specifier (blank for ‘any’), enter
*/main — since we are interested in the main branch of
this repo.
Scrolling down, under Build Steps, drop down
Add build step, then select Execute shell.
Enter the following content:
PYENV_HOME=$WORKSPACE/venv

# Delete the previously built virtualenv
if [ -d "$PYENV_HOME" ]; then
    rm -rf "$PYENV_HOME"
fi

# Create the virtualenv and install the necessary packages
virtualenv "$PYENV_HOME"
. "$PYENV_HOME/bin/activate"
"$PYENV_HOME/bin/pip" install -e .
"$PYENV_HOME/bin/pytest"
This script starts off by deleting any existing virtual environment; it then creates
a new one, activates it, does an editable install of all required packages, and finally
runs Pytest.
Note that, because the script recreates the virtual environment from scratch on every
build, it can download a lot of data, depending on how many packages the project requires.
Finally, click on the Save button to move to the project
page. The breadcrumb on the top left hand corner should now show
Dashboard > app_demo >.
❹ Underneath Dashboard, the fourth (4th) item is
▷ Build Now. Click on ▷ Build Now to build!
With a bit of luck 😂, it should “build” successfully, and the screen should
look like:
Underneath Build History, there is a green tick
preceding #1, which indicates that
this build has been successful. In case of a failure, it’s
a red ✗. In either case, clicking on #1
goes to the build detail screen; then, on the left-hand side,
clicking on Console Output shows the full log
of the build, which is very rich in information. For failed
builds, I have been able to use this information to get rid of the
problems.
❺ Let’s look at what happens on disk. Jenkins’ work directory is
/var/lib/jenkins/workspace/:
app_demo can be seen at the top of the list; it was created by
Jenkins during the build. It is a ready-to-use Python development environment.
Let’s go to it, and activate the virtual environment:
$ cd /var/lib/jenkins/workspace/app_demo/
$ source venv/bin/activate
The virtual environment was activated successfully:
Let’s run Pytest. All tests should just pass:
$ venv/bin/pytest
All tests passed, as should be the case. Deactivate the virtual environment with:
(venv) $ deactivate
But Jenkins still failed to start. I had forgotten that I had already assigned
port 8080 to Apache2. I freed up port 8080
and started Jenkins with:
$ systemctl start jenkins.service
And it started with no problem. I am guessing the previous two installation attempts were also
successful, but it does not matter now.
After a successful installation, we need to do some initial configuration.
I just followed the instructions in this DigitalOcean article:
How To Install Jenkins on Ubuntu 22.04.
✿✿✿
I like Jenkins thus far; it makes sense. I have worked in an environment
where the build server and the unit test server were two VMware machines.
The build process was written in Windows PowerShell script, and it got the
source code from source code management software installed in-house.
Jenkins offers the same capability, but the process seems much simpler. I hope
you find the information useful. Thank you for reading, and stay safe as always.
Setting the value of a class attribute via the class will propagate the
new value to class instances which have not overridden the value of this
class attribute. This is in conformance with the documentation above.
Setting the value of a class attribute via the class will propagate the
new value down to child classes, but not vice versa.
Let’s explore these points via examples.
❶ Attribute lookup prioritises the instance.
This is an example from the documentation page quoted above; I have tweaked it a tiny bit.
class Warehouse:
    purpose = 'Storage'
    region = 'west'
We just instantiate an instance of Warehouse, then override the
value of the region attribute with 'east'(*):
2: Storage east
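Put together, a runnable sketch of the example (the print labels mirror the output shown above):

```python
class Warehouse:
    purpose = 'Storage'
    region = 'west'

w1 = Warehouse()
# w1 has no instance attributes of its own; lookup falls through to the class.
print(f"1: {w1.purpose} {w1.region}")  # prints: 1: Storage west

w2 = Warehouse()
w2.region = 'east'  # creates an instance attribute that shadows the class one
print(f"2: {w2.purpose} {w2.region}")  # prints: 2: Storage east
```

Note that the class attribute itself is untouched: `Warehouse.region` is still `'west'` after the assignment to `w2`.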
— (*): Please note, what I’ve just written
above might not be correct… According to the quoted documentation,
the statement w2.region = 'east' actually means assigning
a new attribute region to instance w2, rather
than overriding the class attribute as I’ve written.
❷ Setting the value via the class propagates the new value to instances that
have not provided their own value.
w1 has not set its own value for the region attribute, so
setting the new value via the class Warehouse does propagate
back to instance w1. w2, on the other hand, has set its
own, so it was not affected.
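A sketch of this behaviour (variable names follow the discussion above):

```python
class Warehouse:
    purpose = 'Storage'
    region = 'west'

w1 = Warehouse()
w2 = Warehouse()
w2.region = 'east'  # w2 now carries its own instance attribute

Warehouse.region = 'north'  # set a new value via the class

print(w1.region)  # prints: north -- w1 still resolves region on the class
print(w2.region)  # prints: east -- w2's instance attribute shadows the class
```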
❸ Setting the value propagates from the parent class to child classes, but
not vice versa.
Consider the following classes:
class Engine:
    started = False

class TwoStrokeEngine(Engine):
    pass

class FourStrokeEngine(Engine):
    pass
In their initial state, started is False
for all classes:
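A sketch demonstrating both directions (the print statements are mine):

```python
class Engine:
    started = False

class TwoStrokeEngine(Engine):
    pass

class FourStrokeEngine(Engine):
    pass

# Initial state: both subclasses inherit started from Engine.
print(Engine.started, TwoStrokeEngine.started, FourStrokeEngine.started)
# prints: False False False

# Parent -> children: setting via Engine propagates to both subclasses.
Engine.started = True
print(TwoStrokeEngine.started, FourStrokeEngine.started)  # prints: True True

# Child -> parent: setting via TwoStrokeEngine does NOT propagate upwards.
Engine.started = False
TwoStrokeEngine.started = True
print(Engine.started, FourStrokeEngine.started)  # prints: False False
```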
I discuss a basic setup process for using the official PostgreSQL Docker image on Windows 10 Pro, and on Ubuntu 22.10 kinetic running on an older HP laptop. I then back up a PostgreSQL database on the Windows 10 Pro machine, and restore this backup to the newly set up Docker PostgreSQL Server 15.1 on the Ubuntu 22.10 machine.
This is the full documentation for these images.
Please note, this page has links to Docker official documents on volumes,
etc., which are necessary to run images such as this.
This post also makes use of PostgreSQL Server password file, whose official
documentation is
34.16. The Password File.
The objectives of this post are rather basic. ❶, getting the Docker
container to store the data in a specific location of my own choosing on
the host. ❷, implementing the password file on the host and passing
it to the Docker container as per the official documentation above.
Of course, the final goal is to connect to a PostgreSQL server
running in a Docker container with whatever clients we need.
PostgreSQL Server Docker official image — version 15.1 (Debian 15.1-1.pgdg110+1).
Windows 10 Pro — version 10.0.19045 Build 19045.
Ubuntu — version 22.10 kinetic. The machine it runs
on is an older HP Pavilion laptop. The name of this machine is
HP-Pavilion-15, the rest of this post will use this
name and Ubuntu 22.10 interchangeably.
Windows 10 pgAdmin 4 — version 6.18. Older
versions might not work: when trying to connect, they fail with different
errors.
On Windows 10, “docker” CLI (Docker Engine) — version 20.10.17.
On Ubuntu 22.10, “docker” CLI (Docker Engine) — version 20.10.22.
Since I already have PostgreSQL Server 14 installed on Windows 10 Pro,
I have to turn its service process off before setting up another server
in a Docker container.
❶ I chose to store the PostgreSQL data in D:\docker_data\postgresql\.
After creating this directory path, it can be mounted on the docker run
command as:
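For illustration, using the official image's documented data directory /var/lib/postgresql/data, the command might look something like the following sketch. The container name and the other flags are my assumptions, not the exact command from this post:

```shell
# Sketch only: //d/docker_data/postgresql is how Docker on Windows sees
# the host directory D:\docker_data\postgresql\.
docker run -d --name postgresql-docker \
    -p 5432:5432 \
    -e POSTGRES_PASSWORD=pcb.2176310315865259 \
    --mount type=bind,source=//d/docker_data/postgresql,target=/var/lib/postgresql/data \
    postgres:15.1
```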
My trial-and-error runs show that the host directory,
which is D:\docker_data\postgresql\, translated to
//d/docker_data/postgresql in this case, must be
COMPLETELY empty; otherwise Docker raises an error.
The image was already loaded when it was first pulled. The run command is:
Please note that I have to run two (2) commands to
get the password file to work. I did try running only the final command on
the empty //d/docker_data/postgresql, and it did not work.
Please try it for yourself.
The obvious question is: can we store the password file in a directory
other than the mounted host data directory D:\docker_data\postgresql\?
I don’t know if it is possible; if it is, I don’t know how
to do it yet.
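For reference, the password file itself is just a text file with one hostname:port:database:username:password line per connection, and PostgreSQL requires that it not be readable by group or others. A minimal sketch of creating one on a Linux host (the path and file name are my assumptions):

```shell
# Create a password file; the password matches the one used in this post.
mkdir -p /tmp/demo-postgres
printf 'localhost:5432:*:postgres:pcb.2176310315865259\n' > /tmp/demo-postgres/pgpass
# PostgreSQL rejects a password file that is readable by group or others.
chmod 0600 /tmp/demo-postgres/pgpass
```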
To connect pgAdmin 4 to the newly set up
Docker PostgreSQL Server 15.1, register a new server as:
Host name/address: localhost
Port: 5432.
Username: postgres — I am using the default as per official document.
Password: pcb.2176310315865259
Docker PostgreSQL Server 15.1 is now ready on Windows 10.
On Ubuntu 22.10, I did not do any of the trial-and-error runs
that I did on Windows 10. I assume that what does not work on Windows 10
will also not work on Ubuntu 22.10.
❶ Copy the image to /home/behai/Public/docker-images/,
then load the image with:
❷ I want to store the data under /home/behai/Public/database/postgresql/,
so I create the directories database/postgresql/ under /home/behai/Public/,
and run the first command:
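The two steps might look something like the following sketch. The image archive file name and the run flags are my assumptions, mirroring the Windows 10 setup:

```shell
# Load the image from the copied archive (file name is an assumption).
docker load -i /home/behai/Public/docker-images/postgres-15.1.tar

# Run it, bind-mounting the chosen host directory as the data directory.
docker run -d --name postgresql-docker \
    -p 5432:5432 \
    -e POSTGRES_PASSWORD=pcb.2176310315865259 \
    --mount type=bind,source=/home/behai/Public/database/postgresql,target=/var/lib/postgresql/data \
    postgres:15.1
```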
❺ Updated on 16/01/2023 — open port 5432 for external access:
$ sudo ufw allow from any to any port 5432 proto tcp
Since this is a development environment, there is no IP address restriction
applied; in a production environment, I imagine only certain IP addresses would
be allowed. Please be mindful of this.
16/01/2023 update ends.
From Windows 10, to connect pgAdmin 4
to the Docker PostgreSQL Server 15.1 running on HP-Pavilion-15, register a new server:
Host name/address: HP-Pavilion-15 — it’s better to use
the machine name, since IP addresses can change.
Port: 5432.
Username: postgres — I am using the default as per official document.
Password: pcb.2176310315865259
I already have PostgreSQL Server 14 installed on Windows 10 Pro.
I back up a development database, ompdev, from this server,
and restore the backup data to the Docker PostgreSQL Server 15.1 running
on Ubuntu 22.10 (machine name HP-Pavilion-15).
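The backup can be taken with pg_dump; a sketch only, where the output file name is my assumption. Note the absence of the --create flag, which is why the dump contains no create database statement:

```shell
# Dump the ompdev database to a plain-SQL file on the Windows 10 machine.
pg_dump -h localhost -p 5432 -U postgres -d ompdev -f ompdev.sql
```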
Please note, the above command will not include a create database statement in
the dump file; on the target server, we need to manually create a database to
restore to.
Restoring to HP-Pavilion-15 involves two simple steps.
⓵ Connect pgAdmin 4 to Docker PostgreSQL Server on HP-Pavilion-15,
as discussed. Then create a new
database with:
CREATE DATABASE ompdev;
Please note, it does not have to be pgAdmin 4; we can use
any other client tool available.
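⓶ Restore the dump into the newly created database. For a plain-SQL dump, psql can replay it; a sketch, where the dump file name is my assumption:

```shell
# Replay the plain-SQL dump against the ompdev database on HP-Pavilion-15.
psql -h HP-Pavilion-15 -p 5432 -U postgres -d ompdev -f ompdev.sql
```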
If everything goes well, we should now have the database restored and ready for connection
on Docker PostgreSQL Server 15.1 running on Ubuntu 22.10. The below screen capture showing
the ompdev database restored on HP-Pavilion-15:
Docker Compose: how to wait for the MySQL server container to be ready? —
Waiting for a database server to be ready before starting our own application, such as a middle-tier server, is a familiar issue, and Docker Compose is no exception: our own application container must wait for its database server container to be ready to accept requests before sending requests over. I’ve tried two (2) “wait for” tools which are officially recommended by Docker. I’m discussing my attempts in this post, and describing some of the pending issues I still have.
Synology DS218: unsupported Docker installation and usage… —
Synology does not have Docker support for AArch64 NAS models. DS218 is an AArch64 NAS model. In this post, we’re looking at how to install Docker for unsupported Synology DS218, and we’re also conducting tests to prove that the installation works.
Python: Docker image build — install required packages via requirements.txt vs editable install. —
Install via requirements.txt means using the image build step command “RUN pip3 install -r requirements.txt”. Editable install means using the “RUN pip3 install -e .” command. In my experience, installing via requirements.txt resulted in images that do not run, whereas editable install resulted in images that work as expected. I’m presenting my findings in this post.
Python: Docker image build — “the Werkzeug” problem 🤖! —
I’ve seen a Docker image build install a different version of the Werkzeug dependency package than the development editable install process did, and this caused the Python project in the Docker image to fail to run. Development editable install means running the “pip3 install -e .” command within an active virtual environment. I’m describing the problem and how to address it in this post.
Python: Docker volumes — where is my SQLite database file? —
The Python application in a Docker image writes some data to a SQLite database. Stop the container and run it again, and the data is no longer there! A volume must be specified when running an image to persist the data. But where is the SQLite database file, on both Windows 10 and Linux? We’re discussing volumes and where volumes live on disk for both operating systems.
Docker on Windows 10: mysql:8.0.30-debian log files —
Running the Docker Official Image mysql:8.0.30-debian on my Windows 10 Pro host machine, I want to log all queries, slow queries and errors to files on the host machine. In this article, we’re discussing how to go about achieving this.
pgloader Docker: migrating from Docker & localhost MySQL to localhost PostgreSQL. —
Using the latest dimitri/pgloader Docker image build, I’ve migrated a Docker MySQL Server 8.0.30 database and a locally installed MySQL Server 5.5 database to locally installed PostgreSQL Server 14.3 databases. I’m discussing how I did it in this post.
A repo was tagged, then some files were removed. Are those removed files still available for cloning (downloading) at the tagged version?
Let’s elaborate on the question a little bit more. My project is fully functional,
and I version-tagged its GitHub repo with “v1.0.0”. I continue working
on this project; in the process, I have made some modules from v1.0.0
obsolete, and I removed them. At a later date, I clone tag v1.0.0
to my local machine; do I actually get the modules that were removed?
I take no responsibility for any damages or losses resulting from
applying the procedures outlined in this post.
— The answer is yes; the removed files associated with version tag
v1.0.0 are still available. My verification attempts are
discussed below.
✿✿✿
❶ I created a new repo, https://github.com/behai-nguyen/learn-git.git;
my local working directory is D:\learn-git, and there are two (2) files in this
directory: 01-mysqlconnector.py and 02-mysqlclient.py.
⓵ Initialise the repo and check the two (2) files in:
git init
git config user.name "behai-nguyen"
git config user.email "behai_nguyen@hotmail.com"
git add .
git commit -m "Two (2) files to be tagged v1.0.0."
git branch -M main
git remote add origin https://github.com/behai-nguyen/learn-git.git
git push -u origin main
⓶ Version-tag the repo with v1.0.0:
git tag -a v1.0.0 -m "First version: 01-mysqlconnector.py and 02-mysqlclient.py."
git push origin --tags
My local working directory D:\learn-git and my repo:
❷ Remove 01-mysqlconnector.py from my local directory and from the repo:
git rm -f 01-mysqlconnector.py
git commit -m "Obsolete."
git branch -M main
git push -u origin main
Manually verify that it was removed from both the local directory and the repo.
❸ Now, clone version tag v1.0.0 to ascertain whether 01-mysqlconnector.py
is still available. My working drive is E:, and it should not yet have the directory
learn-git:
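The clone command is simply git clone with the --branch (-b) option, which accepts tags as well as branches. The whole experiment can even be re-enacted locally, without GitHub; a sketch, where everything under /tmp is my assumption:

```shell
set -e
demo=/tmp/learn-git-demo
rm -rf "$demo" && mkdir -p "$demo" && cd "$demo"

# Create a repo with one file and tag it v1.0.0.
git init -q repo && cd repo
git config user.email "you@example.com"
git config user.name "you"
echo "print('hello')" > 01-mysqlconnector.py
git add . && git commit -qm "File to be tagged v1.0.0."
git tag -a v1.0.0 -m "First version."

# Remove the file after tagging.
git rm -q 01-mysqlconnector.py
git commit -qm "Obsolete."

# Clone at the tag: the removed file is present again.
cd "$demo"
git clone -q --branch v1.0.0 repo clone-at-tag
ls clone-at-tag/01-mysqlconnector.py
```

The clone at the tag checks out a detached HEAD at v1.0.0, so the removed file is right there in the working tree.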