CI/CD #01. Jenkins: manually clone a Python GitHub repo and run Pytest.

I’m describing the steps required to manually get Jenkins to: ⓵ clone a Python project GitHub repository, ⓶ create a virtual environment, ⓷ run editable install, and finally, ⓸ run Pytest for all tests.

Manual, in the context of this post, means we have to click something on the Jenkins UI to get a build going.

I’ve installed Jenkins 2.388 LTS on my Ubuntu 22.10. The installation process is pretty straightforward; I’m including a description of it in the last part of this post.

Once Jenkins is successfully installed and running, we can access its front-end at http://<machine-ip-address>:8080 or http://<machine-name>:8080. In my case, from Windows 10, http://hp-pavilion-15:8080/, or on the Ubuntu 22.10 machine where Jenkins is installed, http://localhost:8080.

As usual, I failed several times before getting my first project going. I am documenting what I’ve learned in this post; it is by no means a tutorial. I’m planning to write more on Jenkins as I go along, which is why CI/CD #01 is in the title.

I’m going back to this https://github.com/behai-nguyen/app-demo.git repo for this post. We’ll set up a Jenkins project to carry out the four (4) steps outlined in the introduction. Let’s get to it.

❶ Log into Jenkins using the admin user name and password created as part of the installation. Click on Dashboard in the top left hand corner; then click on + New Item underneath.

❷ On the next page:

  • Enter an item name: app_demo — this name matches the directory name of the project. (In hindsight, the name of the repo should’ve been app_demo instead of app-demo!)
  • Then select the first option Freestyle project.
  • Finally, click on the OK button to move to the Configure page.

❸ On the Configure page:

  • Under General, enter something meaningful for Description, e.g.: “Try Jenkins on app-demo repo.”
  • Under Source Code Management:
    • Select Git. Then,
    • For Repository URL, enter https://github.com/behai-nguyen/app-demo.git. Since this is a public repo that anybody can clone, we don’t need any credentials to access it.
    • For Branch Specifier (blank for ‘any’), enter */main — since we are interested in the main branch of this repo.
  • Scrolling down, under Build Steps, drop down Add build step, then select Execute shell. Enter the following content:
    PYENV_HOME="$WORKSPACE/venv"
    
    # Delete the previously built virtualenv
    if [ -d "$PYENV_HOME" ]; then
        rm -rf "$PYENV_HOME"
    fi
    
    # Create the virtualenv, install required packages, then run Pytest
    virtualenv "$PYENV_HOME"
    . "$PYENV_HOME/bin/activate"
    "$PYENV_HOME/bin/pip" install -e .
    "$PYENV_HOME/bin/pytest"
    

    This script starts off by deleting any existing virtual environment; it then creates a new one, activates it, does an editable install of all required packages, and finally runs Pytest.

    Note that, because the script recreates the virtual environment and reinstalls every package on each build, it can use a lot of data, depending on how many packages are in the project.

  • Finally, click on the Save button to move to the project page. The breadcrumb on the top left hand corner should now show Dashboard > app_demo >.

❹ Underneath Dashboard, the fourth (4th) item is ▷ Build Now. Click on ▷ Build Now to build!

With a bit of luck 😂, the build should succeed, and the screen should look like:

Underneath Build History, a green tick preceding #1 indicates that the build was successful; in case of a failure, it is a red x. In either case, clicking on #1 goes to the build detail screen; then, on the left hand side, clicking on Console Output shows the full log of the build, which is very informative. For failed builds, I have been able to use this information to get rid of the problems.

❺ Let’s look at what happens on disk. Jenkins’ work directory is /var/lib/jenkins/workspace/:

app_demo can be seen at the top of the list; it was created by Jenkins during the build. It is a ready-to-use Python development environment. Let’s go to it and activate the virtual environment:

$ cd /var/lib/jenkins/workspace/app_demo/
$ source venv/bin/activate

The virtual environment was activated successfully:

Let’s run Pytest. All tests should just pass:

$ venv/bin/pytest

All tests passed, as should be the case. Deactivate the virtual environment with:

$ deactivate

Tutorial References

  1. Jenkins and Python.
  2. YouTube: How Do I Run a Python Script From Jenkins Pipeline?

Jenkins Installation

I tried installing Jenkins on my Ubuntu 22.10 three (3) times; each installation went smoothly, but Jenkins failed to start every time. The instructions I used for the last installation are from this link https://community.jenkins.io/t/ubuntu-20-04-initial-jenkins-startup-failure/1419 (the answer by Mr. Mark Waite of the Jenkins Governance Board).

Basically, run these commands one after the other:

$ sudo apt-get install openjdk-11-jdk-headless
$ curl -fsSL https://pkg.jenkins.io/debian/jenkins.io.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
$ echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
$ sudo apt-get update
$ sudo apt-get install jenkins

But it still failed to start. I had forgotten that I had assigned port 8080 to Apache2. I freed up port 8080 and started Jenkins with:

$ systemctl start jenkins.service

And it started with no problem. I am guessing the previous two installations were also successful, and the port conflict was what prevented Jenkins from starting; but it does not matter now.

After a successful installation, we need to do some initial configuration. I just followed the instructions in the DigitalOcean article How To Install Jenkins on Ubuntu 22.04.

We can access the Jenkins front-end at http://<machine-ip-address>:8080 or http://<machine-name>:8080. In my case, from Windows 10, http://hp-pavilion-15:8080/, or on the Ubuntu 22.10 machine where Jenkins is installed, http://localhost:8080.

✿✿✿

I like Jenkins thus far; it makes sense. I have worked in an environment where the build server and the unit test server were two VMware machines. The build process was written in Windows PowerShell scripts, and it pulled source code from source code management software installed in-house. Jenkins offers the same capability, but the process seems much simpler. I hope you find the information useful. Thank you for reading and stay safe as always.

Python: class attributes, some behaviours we should be aware of.

We look at some behaviours of class attributes which can help to make life easier for us — mere programmers.

We will look at the following three (3) behaviours:

  1. From the Python official documentation:

    9.4. Random Remarks

    If the same attribute name occurs in both an instance and in a class, then attribute lookup prioritizes the instance.

    https://docs.python.org/3/tutorial/classes.html
  2. Setting the value of a class attribute via the class will propagate the new value to class instances that have not overridden the value of this class attribute. This is in conformance with the documentation above.
  3. Setting the value of a class attribute via the class will propagate the new value down to child classes, but not vice versa.

Let’s explore these points via examples.

❶ Attribute lookup prioritises the instance.

This is an example from the documentation page quoted above; I have tweaked it a tiny bit.

class Warehouse:
    purpose = 'Storage'
    region = 'west'

Then:

w1 = Warehouse()
print("1: ", w1.purpose, w1.region)

Output — these are the default class attribute values:

1:  Storage west

w2 = Warehouse()
w2.region = 'east'
print("2: ", w2.purpose, w2.region)

We instantiate a second instance of Warehouse, then override the value of the region attribute with 'east' (*):

2:  Storage east

(*): please note, “override” is not strictly correct. According to the quoted documentation, the statement w2.region = 'east' actually assigns a new instance attribute region to w2; that instance attribute then takes priority over the class attribute of the same name during lookup.
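This can be verified directly: instance attributes live in the instance’s __dict__, which vars() exposes. A minimal self-contained sketch, re-declaring the Warehouse class from above:

```python
class Warehouse:
    purpose = 'Storage'
    region = 'west'

w1 = Warehouse()
w2 = Warehouse()
w2.region = 'east'  # creates an instance attribute on w2 only

# w1 carries no instance attributes; w2 now has its own 'region'.
print(vars(w1))          # {}
print(vars(w2))          # {'region': 'east'}

# The class attribute itself is untouched.
print(Warehouse.region)  # west
```

So w2.region = 'east' does not modify the class attribute; it adds a region entry to w2’s own namespace that shadows the class attribute during lookup.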

❷ Setting the value via the class propagates the new value to instances that have not provided their own value.

We continue with examples in ❶:

Warehouse.region = 'north'
w3 = Warehouse()
print("3: ", w3.purpose, w3.region)

Instance w3 is created with whatever values the Warehouse class attributes hold at that point:

3:  Storage north

How does setting Warehouse.region = 'north' affect the two (2) existing instances w1 and w2?

print(f"4: w1.region: {w1.region}, w2.region: {w2.region}")
4: w1.region: north, w2.region: east

w1 has not set its own value for the region attribute, so setting the new value via the class Warehouse does propagate to instance w1. w2, on the other hand, has set its own value, so it was not affected.
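Another way to see the lookup order at work: deleting the instance attribute makes lookup fall through to the class again. A small self-contained sketch, recreating the state above:

```python
class Warehouse:
    purpose = 'Storage'
    region = 'west'

w2 = Warehouse()
w2.region = 'east'          # instance attribute shadows the class attribute
Warehouse.region = 'north'  # does not affect w2

print(w2.region)            # east: the instance attribute wins

del w2.region               # remove the instance attribute
print(w2.region)            # north: lookup falls back to the class
```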

❸ Setting the value propagates from the parent class to child classes, but not vice versa.

Consider the following classes:

class Engine:
    started = False

class TwoStrokeEngine(Engine):
    pass

class FourStrokeEngine(Engine):
    pass

In their initial state, started is False for all classes:

print(f"1. Engine.started: {Engine.started}")
print(f"1. TwoStrokeEngine.started: {TwoStrokeEngine.started}")
print(f"1. FourStrokeEngine.started: {FourStrokeEngine.started}\n")
1. Engine.started: False
1. TwoStrokeEngine.started: False
1. FourStrokeEngine.started: False

Let’s set Engine.started to True:

Engine.started = True

print(f"2. Engine.started: {Engine.started}")
print(f"2. TwoStrokeEngine.started: {TwoStrokeEngine.started}")
print(f"2. FourStrokeEngine.started: {FourStrokeEngine.started}\n")
2. Engine.started: True
2. TwoStrokeEngine.started: True
2. FourStrokeEngine.started: True

Let’s switch Engine.started back to False:

Engine.started = False

print(f"3. Engine.started: {Engine.started}")
print(f"3. TwoStrokeEngine.started: {TwoStrokeEngine.started}")
print(f"3. FourStrokeEngine.started: {FourStrokeEngine.started}\n")
3. Engine.started: False
3. TwoStrokeEngine.started: False
3. FourStrokeEngine.started: False

Let’s set FourStrokeEngine.started to True:

FourStrokeEngine.started = True

print(f"4. Engine.started: {Engine.started}")
print(f"4. TwoStrokeEngine.started: {TwoStrokeEngine.started}")
print(f"4. FourStrokeEngine.started: {FourStrokeEngine.started}\n")
4. Engine.started: False
4. TwoStrokeEngine.started: False
4. FourStrokeEngine.started: True

We can see that setting the value propagates from the parent class to the child classes, but not vice versa.
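This behaviour follows from where the attribute is stored: started lives only in Engine’s namespace, and the subclasses find it through inheritance until they are assigned a copy of their own. A minimal sketch, re-declaring the classes above:

```python
class Engine:
    started = False

class TwoStrokeEngine(Engine):
    pass

class FourStrokeEngine(Engine):
    pass

# Neither subclass has its own 'started'; lookup walks up to Engine.
print('started' in vars(TwoStrokeEngine))   # False
print('started' in vars(FourStrokeEngine))  # False

FourStrokeEngine.started = True

# The assignment gives FourStrokeEngine its own entry, shadowing Engine's.
print('started' in vars(FourStrokeEngine))  # True
print(Engine.started, TwoStrokeEngine.started, FourStrokeEngine.started)
# False False True
```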

What about their instances? Continue on with the examples above:

"""
FourStrokeEngine.started is True from above.
"""

engine = Engine()
two_stroke_engine = TwoStrokeEngine()
four_stroke_engine = FourStrokeEngine()
four_stroke_engine1 = FourStrokeEngine()

print(f"5. engine.started: {engine.started}")
print(f"5. two_stroke_engine.started: {two_stroke_engine.started}")
print(f"5. four_stroke_engine.started: {four_stroke_engine.started}")
print(f"5. four_stroke_engine1.started: {four_stroke_engine1.started}\n")

Engine.started = True

print(f"6. engine.started: {engine.started}")
print(f"6. two_stroke_engine.started: {two_stroke_engine.started}")
print(f"6. four_stroke_engine.started: {four_stroke_engine.started}")
print(f"6. four_stroke_engine1.started: {four_stroke_engine1.started}\n")

Output:

5. engine.started: False
5. two_stroke_engine.started: False
5. four_stroke_engine.started: True
5. four_stroke_engine1.started: True

6. engine.started: True
6. two_stroke_engine.started: True
6. four_stroke_engine.started: True
6. four_stroke_engine1.started: True

Let’s set TwoStrokeEngine.started to False, and see what happens to existing instances:

TwoStrokeEngine.started = False

print(f"7. engine.started: {engine.started}")
print(f"7. two_stroke_engine.started: {two_stroke_engine.started}")
print(f"7. four_stroke_engine.started: {four_stroke_engine.started}")
print(f"7. four_stroke_engine1.started: {four_stroke_engine1.started}\n")
7. engine.started: True
7. two_stroke_engine.started: False
7. four_stroke_engine.started: True
7. four_stroke_engine1.started: True

It makes sense that only two_stroke_engine.started was affected.
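The flip side is worth spelling out: once a subclass has been assigned its own started, later changes made via Engine no longer reach that subclass or its instances. A short sketch:

```python
class Engine:
    started = False

class TwoStrokeEngine(Engine):
    pass

two_stroke_engine = TwoStrokeEngine()

TwoStrokeEngine.started = False  # the subclass now owns its own copy
Engine.started = True            # no longer propagates to TwoStrokeEngine

print(Engine.started)             # True
print(TwoStrokeEngine.started)    # False
print(two_stroke_engine.started)  # False
```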

I did get caught out by some of these behaviours, hence this post. I do hope you find it useful. Thank you for reading and stay safe as always.

Using PostgreSQL Official Docker image on Windows 10 and Ubuntu 22.10 kinetic.

Discussing a basic setup process for using the PostgreSQL Official Docker image on Windows 10 Pro, and on Ubuntu 22.10 kinetic running on an older HP laptop. We then back up a PostgreSQL database on the Windows 10 Pro machine, and restore this backup to the newly set up Docker PostgreSQL Server 15.1 on the Ubuntu 22.10 machine.

The PostgreSQL Server Docker official images are at this address postgres Docker Official Image.

This is the full documentation for these images. Please note, this page has links to Docker official documents on volumes, etc., which are necessary to run images such as this.

This post also makes use of PostgreSQL Server password file, whose official documentation is 34.16. The Password File.

The objectives of this post are rather basic. ❶, getting the Docker container to store the data in a specific location on the host, of my own choosing. ❷, implementing the password file on the host and pass it to the Docker container as per official documentation above.

Of course, the final goal is to connect to a PostgreSQL server running in a Docker container with whatever clients we need.


Downloading and Storing the Image Locally

To download:

E:\docker-images>docker pull postgres:latest

To save the image locally to E:\docker-images\:

E:\docker-images>docker save postgres:latest --output postgres-latest.tar

postgres-latest.tar is also used on Ubuntu 22.10 later on. This Docker image contains PostgreSQL Server version 15.1 (Debian 15.1-1.pgdg110+1).

Environments

  1. PostgreSQL Server Docker official image — version 15.1 (Debian 15.1-1.pgdg110+1).
  2. Windows 10 Pro — version 10.0.19045 Build 19045.
  3. Ubuntu — version 22.10 kinetic. The machine it runs on is an older HP Pavilion laptop. The name of this machine is HP-Pavilion-15, the rest of this post will use this name and Ubuntu 22.10 interchangeably.
  4. Windows 10 pgAdmin 4 — version 6.18. Older versions might not work: when trying to connect, they fail with different errors.
  5. On Windows 10, docker CLI (Docker Engine) — version 20.10.17.
  6. On Ubuntu 22.10, docker CLI (Docker Engine) — version 20.10.22.

On Windows 10

Since I already have PostgreSQL Server 14 installed on Windows 10 Pro, I have to turn its service off before setting up another server in a Docker container.

❶ I chose to store the PostgreSQL data in D:\docker_data\postgresql\. After creating this directory path, it can be mounted on the docker run command as:

--mount type=bind,source=//d/docker_data/postgresql,target=/var/lib/postgresql/data

My trial and error runs show that the host directory (D:\docker_data\postgresql\, translated to //d/docker_data/postgresql in this case) must be COMPLETELY empty; otherwise Docker raises an error.

The image was already loaded when it was first pulled. The run command is:

docker run -d -it -p 5432:5432 --name postgresql-docker -e POSTGRES_PASSWORD=pcb.2176310315865259 --mount type=bind,source=//d/docker_data/postgresql,target=/var/lib/postgresql/data postgres:latest

❷ Now, stop and remove the postgresql-docker container:

C:\>docker stop postgresql-docker
C:\>docker rm postgresql-docker

Verify that container postgresql-docker has been removed, run:

C:\>docker ps -a

CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

I have no other containers; your list is likely to be different. We should be able to confirm that postgresql-docker is no longer in the list.

Initial PostgreSQL Server files and folders should now have been created under D:\docker_data\postgresql\: around 23 (twenty-three) items, most of them folders.

❸ Now create the password file secrets\pgpass.conf under D:\docker_data\postgresql\:

🐘 Content of D:\docker_data\postgresql\secrets\pgpass.conf:

localhost:5432:postgres:postgres:pcb.2176310315865259

As per official documentation, the password file is passed to the container as:

-e POSTGRES_PASSWORD_FILE=/var/lib/postgresql/data/secrets/pgpass.conf

Recall that the container directory /var/lib/postgresql/data/ is bound to the host directory //d/docker_data/postgresql in the first mount:

--mount type=bind,source=//d/docker_data/postgresql,target=/var/lib/postgresql/data

❹ The final command is, then:

docker run -d -it -p 5432:5432 --name postgresql-docker --mount type=bind,source=//d/docker_data/postgresql,target=/var/lib/postgresql/data -e POSTGRES_PASSWORD_FILE=/var/lib/postgresql/data/secrets/pgpass.conf postgres:latest

Please note that I have to run two (2) commands to get the password file to work. I did try running only the final command on the empty //d/docker_data/postgresql directory; it did not work. Please try it for yourself.

The obvious question is: can we store the password file in a directory other than the mounted host data directory D:\docker_data\postgresql\? I don’t know if that is possible; if it is, I don’t know how to do it yet.

To connect pgAdmin 4 to the newly set up Docker PostgreSQL Server 15.1, register a new server as:

  1. Host name/address: localhost
  2. Port: 5432.
  3. Username: postgres — I am using the default as per official document.
  4. Password: pcb.2176310315865259


Docker PostgreSQL Server 15.1 is now ready on Windows 10.

On Ubuntu 22.10 kinetic

On Ubuntu 22.10, I did not repeat the trial and error runs from Windows 10. I assume that what does not work on Windows 10 will also not work on Ubuntu 22.10.

❶ Copy the image to /home/behai/Public/docker-images/, then load the image with:

behai@HP-Pavilion-15:~$ sudo docker load --input /home/behai/Public/docker-images/postgres-latest.tar

❷ I want to store the data under /home/behai/Public/database/postgresql/. Create the directories database/postgresql/ under /home/behai/Public/, then run the first command:

$ sudo docker run -d -it -p 5432:5432 --name postgresql-docker -e POSTGRES_PASSWORD=pcb.2176310315865259 --mount type=bind,source=/home/behai/Public/database/postgresql,target=/var/lib/postgresql/data postgres:latest

❸ Then stop and remove the postgresql-docker container:

behai@HP-Pavilion-15:~$ sudo docker stop postgresql-docker
behai@HP-Pavilion-15:~$ sudo docker rm postgresql-docker

Verify that the Docker container postgresql-docker has been removed:

behai@HP-Pavilion-15:~$ sudo docker ps -a

CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Even if the list is not empty, postgresql-docker should not be in the container list.

Initial PostgreSQL Server files and folders should now have been created under /home/behai/Public/database/postgresql/:

❹ Now create the password file secrets/pgpass.conf under /home/behai/Public/database/postgresql/:

🐘 Content of /home/behai/Public/database/postgresql/secrets/pgpass.conf:

localhost:5432:postgres:postgres:pcb.2176310315865259

The password file is passed to the Docker container as:

-e POSTGRES_PASSWORD_FILE=/var/lib/postgresql/data/secrets/pgpass.conf

The container directory /var/lib/postgresql/data/ is bound to the host directory /home/behai/Public/database/postgresql/ in the first mount:

--mount type=bind,source=/home/behai/Public/database/postgresql,target=/var/lib/postgresql/data

Final command:

$ sudo docker run -d -it -p 5432:5432 --name postgresql-docker --mount type=bind,source=/home/behai/Public/database/postgresql,target=/var/lib/postgresql/data -e POSTGRES_PASSWORD_FILE=/var/lib/postgresql/data/secrets/pgpass.conf postgres:latest

Updated on 16/01/2023 — open port 5432 for external access:

$ sudo ufw allow from any to any port 5432 proto tcp

Since this is a development environment, no IP address restriction is applied; in a production environment, I imagine only certain IP addresses would be allowed. Please be mindful of this.

16/01/2023 update ends.

From Windows 10, to connect pgAdmin 4 to the Docker PostgreSQL Server 15.1 running on HP-Pavilion-15, register a new server:

  1. Host name/address: HP-Pavilion-15 — it’s better to use the machine name, since IP addresses can change.
  2. Port: 5432.
  3. Username: postgres — I am using the default as per official document.
  4. Password: pcb.2176310315865259


Backup and Restore a Database

I already have PostgreSQL Server 14 installed on Windows 10 Pro. I back up a development database, ompdev, from this server, and restore the backup to the Docker PostgreSQL Server 15.1 running on Ubuntu 22.10 (machine name HP-Pavilion-15).

The database backup command:

"C:\Program Files\PostgreSQL\14\bin\pg_dump.exe" postgresql://postgres:top-secret@localhost/ompdev > ompdev_pg_database.sql

Please note, the above command does not write a CREATE DATABASE statement into the dump file; on the target server, we need to manually create a database to restore into.

Restoring to HP-Pavilion-15 involves two simple steps.

⓵ Connect pgAdmin 4 to Docker PostgreSQL Server on HP-Pavilion-15, as discussed. Then create a new database with:

CREATE DATABASE ompdev;

Please note, it does not have to be pgAdmin 4, we can use any other client tools available.

⓶ Then run the following restore command:

"C:\Program Files\PostgreSQL\14\bin\psql.exe" postgresql://postgres:pcb.2176310315865259@HP-Pavilion-15/ompdev < ompdev_pg_database.sql

If everything goes well, we should now have the database restored and ready for connections on the Docker PostgreSQL Server 15.1 running on Ubuntu 22.10. The screen capture below shows the ompdev database restored on HP-Pavilion-15:

Other Docker Posts Which I’ve Written

  1. Docker Compose: how to wait for the MySQL server container to be ready? — Waiting for a database server to be ready before starting our own application, such as a middle-tier server, is a familiar issue. Docker Compose is no exception. Our own application container must also wait for its own database server container to be ready to accept requests before sending requests over. I’ve tried two (2) “wait for” tools which are officially recommended by Docker. I’m discussing my attempts in this post, and describing some of the pending issues I still have.
  2. Synology DS218: unsupported Docker installation and usage… — Synology does not have Docker support for AArch64 NAS models. DS218 is an AArch64 NAS model. In this post, we’re looking at how to install Docker for unsupported Synology DS218, and we’re also conducting tests to prove that the installation works.
  3. Python: Docker image build — install required packages via requirements.txt vs editable install. — Install via requirements.txt means using this image build step command “RUN pip3 install -r requirements.txt”. Editable install means using the “RUN pip3 install -e .” command. I’ve experienced that install via requirements.txt resulted in images that do not run, whereas using editable install resulted in images that do work as expected. I’m presenting my findings in this post.
  4. Python: Docker image build — “the Werkzeug” problem 🤖! — I’ve experienced Docker image build installed a different version of the Werkzeug dependency package than the development editable install process. And this caused the Python project in the Docker image failed to run. Development editable install means running the “pip3 install -e .” command within an active virtual environment. I’m describing the problem and how to address it in this post.
  5. Python: Docker image build — save to and load from *.tar files. — We can save Docker images to local *.tar files, and later load and run those Docker images from local *.tar files. I’m documenting my learning experimentations in this post.
  6. Python: Docker volumes — where is my SQLite database file? — The Python application in a Docker image writes some data to a SQLite database. Stop the container, and re-run again, the data are no longer there! A volume must be specified when running an image to persist the data. But where is the SQLite database file, in both Windows 10 and Linux? We’re discussing volumes and where volumes are on disks for both operating systems.
  7. Docker on Windows 10: running mysql:8.0.30-debian with a custom config file. — Steps required to run the official mysql:8.0.30-debian image on Windows 10 with custom config file E:\mysql-config\mysql-docker.cnf.
  8. Docker on Windows 10: mysql:8.0.30-debian log files — Running the Docker Official Image mysql:8.0.30-debian on my Windows 10 Pro host machine, I want to log all queries, slow queries and errors to files on the host machine. In this article, we’re discussing how to go about achieving this.
  9. pgloader Docker: migrating from Docker & localhost MySQL to localhost PostgreSQL. — Using the latest dimitri/pgloader Docker image build, I’ve migrated a Docker MySQL server 8.0.30 database, and a locally installed MySQL server 5.5 database to a locally installed PostgreSQL server 14.3 databases. I am discussing how I did it in this post.

Thank you for reading and stay safe as always.

GitHub: are removed version-tagged files still available for downloading?

A repo was tagged, then some files were removed. Are those removed files still available for cloning (downloading) at the tagged version?

Let’s elaborate the question a little bit more. My project is fully functional, I version-tagged its GitHub repo with “v1.0.0”. I continue working on this project. In the process, I have made some modules at v1.0.0 obsolete, and I removed these. At a later date, I clone tag v1.0.0 to my local machine; do I actually have the modules that were removed?

I take no responsibility for any damage or loss resulting from applying the procedures outlined in this post.

The answer is yes; the removed files associated with version tag v1.0.0 are still available. My verification attempts are discussed below.

✿✿✿

❶ I created a new repo https://github.com/behai-nguyen/learn-git.git; my local working directory is D:\learn-git, and there are two (2) files in this directory: 01-mysqlconnector.py and 02-mysqlclient.py.

⓵ Initialise the repo and check the two (2) files in:

git init

git config user.name "behai-nguyen"
git config user.email "behai_nguyen@hotmail.com"

git add .
git commit -m "Two (2) files to be tagged v1.0.0."

git branch -M main
git remote add origin https://github.com/behai-nguyen/learn-git.git
git push -u origin main

⓶ Version-tag the repo with v1.0.0:

git tag -a v1.0.0 -m "First version: 01-mysqlconnector.py and 02-mysqlclient.py."
git push origin --tags

My local working directory D:\learn-git and my repo:

❷ Remove 01-mysqlconnector.py from my local directory, and repo:

git rm -f 01-mysqlconnector.py
git commit -m "Obsolete."
git branch -M main
git push -u origin main

Manually verify that it was removed from both the local directory and the repo.

❸ Now, clone version tag v1.0.0 to ascertain whether 01-mysqlconnector.py is still available. My working drive is E:, and it should not already contain a learn-git directory:

git clone -b v1.0.0 https://github.com/behai-nguyen/learn-git.git

01-mysqlconnector.py is still available at version tag v1.0.0:

❹ Now add a new file 03-pymysql.py, and then create a new version-tag v1.0.1.

⓵ Add the new file 03-pymysql.py, the working directory is D:\learn-git:

git add 03-pymysql.py
git commit -m "Test package pymysql."
git push -u origin main

⓶ Create the new version-tag v1.0.1:

git tag -a v1.0.1 -m "Second version: 02-mysqlclient.py and 03-pymysql.py."
git push origin --tags

❺ At this point:

⓵ With clone command for v1.0.0:

git clone -b v1.0.0 https://github.com/behai-nguyen/learn-git.git

We should get:

  1. 01-mysqlconnector.py
  2. 02-mysqlclient.py

⓶ While downloading v1.0.1:

git clone -b v1.0.1 https://github.com/behai-nguyen/learn-git.git

We should get:

  1. 02-mysqlclient.py
  2. 03-pymysql.py

I needed to know this for myself. I hope you find this information as useful as I do. Thank you for reading and stay safe as always.
