How I Learned Docker from Scratch in 2026 (And How You Can Too)
Look, I’m gonna be straight with you. When I first heard about Docker three years ago, I had no clue what everyone was going on about. Containers? Images? What does that even mean?
I’m Likhon Hussain. These days I work at HostGet Cloud Computing Company as a Senior Operations Executive, dealing with cloud stuff, AI/ML projects, and SaaS platforms. But back then? I was just another developer trying to figure out why my code worked perfectly on my laptop but crashed every single time in production. Sound familiar?
The Day Docker Finally Made Sense
So there I was, working on this Node.js app. Spent two weeks building it. Everything worked great on my machine. Pushed it to production and… boom. Dead. Wouldn’t even start. Turns out my laptop had Node 14. Production server? Node 16. Different versions, different behaviors, same frustration.
My coworker told me “Just use Docker.” I rolled my eyes. Another thing to learn, right? But man, was I wrong. Docker basically lets you pack up your code with everything it needs – the exact Node version, all the libraries, every little thing – into one package.
Then that package runs the same way everywhere. Your laptop, the production server, your teammate’s weird Linux setup. Doesn’t matter. It just works. That’s when it clicked for me.
Forget Everything You Think You Know About Learning Docker
Here’s what most people do wrong. They open up some 400-page Docker book or watch a 10-hour course and try to memorize everything. Then they wonder why they’re confused and nothing makes sense.
Don’t do that. I wasted a month doing exactly that. Reading documentation, taking notes, feeling like I was learning. But when I actually tried to use Docker? Total blank. Couldn’t do anything. Want to know what actually worked?
Week 1: Just Get Something Running
First week, forget building anything. Just get someone else’s container running on your computer.
Install Docker. Takes maybe 10 minutes. Then run this:
```shell
docker pull nginx
docker run -d -p 8080:80 nginx
```
Now open your browser and go to localhost:8080. See that “Welcome to nginx” page? You just ran a web server without installing anything. That’s Docker.
Do this with a few different things. Redis, PostgreSQL, whatever sounds interesting. The point isn’t to understand every flag and option. The point is to see that it actually works.
I spent my first week just pulling random images and running them. Felt like a kid with a new toy. And you know what? That hands-on messing around taught me more than all that documentation reading.
Week 2: Understanding What’s Actually Happening
After you’ve run a few containers, you’ll start wondering what’s going on under the hood. Images are basically templates. Like a recipe. Containers are what you get when you follow that recipe. You can make a hundred cookies from one recipe, right? Same deal. One image, many containers.
Docker Hub is where people share these image recipes. Need MySQL? Someone already made an image for that. Need Python? Yep, got that too. It’s like npm or pip, but for entire application environments.
This saved my butt at HostGet more times than I can count. New project needs five different services? Instead of spending two days installing and configuring everything, I just pulled the images I needed. Had everything running in 20 minutes.
Week 3: Building Your Own Container
Okay, now comes the fun part. Making your own Docker image. You need a file called Dockerfile. That’s where you write your recipe. Here’s one I wrote for a simple Python app:
```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```
That’s it. Six lines. Let me break it down in plain English:
- Start with a Python 3.9 image
- Set /app as the working directory (it's created if it doesn't exist)
- Copy requirements.txt into it
- Install the requirements
- Copy everything else
- Run the app
Build it with docker build -t myapp . and run it with docker run myapp. Done.
My first Dockerfile was a mess. Had like 20 lines doing things the complicated way. Then I realized most Dockerfiles are just these same basic commands repeated. FROM, COPY, RUN, CMD. That’s like 80% of what you need.
The Stuff Nobody Tells You About
Here’s where I wish someone had given me a heads up earlier.
Ports Are Confusing at First
Your container has its own network. So when your app listens on port 5000 inside the container, you can’t just go to localhost:5000 and see it. You need to map it:
```shell
docker run -p 8080:5000 myapp
```
Now localhost:8080 on your computer connects to port 5000 in the container. Took me three days to figure this out. Don’t be me.
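One related gotcha: inside the container, your app has to listen on 0.0.0.0 (all interfaces), not 127.0.0.1, or the port mapping never reaches it. Here's a minimal sketch using Python's built-in http.server; the port and the message are just examples, not anything from a real app:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer any GET with a plain-text greeting
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from the container")

if __name__ == "__main__":
    # 0.0.0.0 binds all interfaces, so Docker's -p mapping can reach the app.
    # Binding 127.0.0.1 here would make it unreachable from outside the container.
    HTTPServer(("0.0.0.0", 5000), Hello).serve_forever()
```

If your app seems to ignore the port mapping, the bind address is the first thing to check.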
Data Disappears When Containers Die
Learned this one the hard way. Set up a database in a container, added a bunch of test data, felt proud of myself. Deleted the container and… yep. All the data gone.
Containers are temporary. You delete them, everything inside goes with them. If you want to keep data, you need volumes:
```shell
docker run -v mydata:/var/lib/postgresql/data postgres
```
Now your database data lives in a volume that survives even when you kill the container.
When You’ve Got Multiple Containers
Real projects aren’t just one container. You’ve got your app, your database, maybe Redis for caching, maybe a message queue. They all need to talk to each other. This is where Docker gets really useful but also where beginners get stuck.
Networks Are Simpler Than You Think
Create a network:
```shell
docker network create myapp-network
```
Run your containers on that network:
```shell
docker run --network myapp-network --name db postgres
docker run --network myapp-network --name web myapp
```
Now your web container can talk to the database using “db” as the hostname. No IP addresses, no complicated networking stuff. Just works.
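In practice that means your app's connection string just uses "db" where a hostname would go. A tiny illustrative helper (the function, the defaults, and the database name are made up for the example):

```python
def postgres_url(user: str, password: str, host: str = "db",
                 port: int = 5432, dbname: str = "appdb") -> str:
    """Build a Postgres connection URL.

    On a shared Docker network, the other container's name ("db" here)
    resolves as a hostname, so no IP address is needed.
    """
    return f"postgresql://{user}:{password}@{host}:{port}/{dbname}"
```

Swap `host` for "localhost" when running outside Docker and the rest of your code doesn't change.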
Docker Compose Saves Your Sanity
After a week of typing out long docker run commands with 15 different flags, I discovered Docker Compose. Changed everything.
Instead of running five different commands, you write one YAML file:
```yaml
services:
  web:
    build: .
    ports:
      - "8080:5000"
  db:
    image: postgres
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```
Then just docker-compose up. Everything starts. docker-compose down. Everything stops and cleans up. Beautiful.
At HostGet, every project uses Docker Compose. Makes life so much easier.
Mistakes I Made (So You Don’t Have To)
Using “latest” Tag
My first production deploy used FROM node:latest in the Dockerfile. Worked great for two months. Then one day the app just wouldn’t start. No code changes. Nothing.
Turns out “latest” had updated to a new major version with breaking changes. Spent four hours debugging before I figured it out.
Now I always use specific versions: FROM node:18.17-alpine. No surprises.
Giant Images
My first Docker image was 1.2GB. Why? Because I copied my entire project folder including node_modules, .git, test files, everything.
Use a .dockerignore file:
```
node_modules
.git
*.log
tests
```
And use Alpine-based images when possible. They’re tiny. Dropped my images from 1GB to 150MB.
Running Everything as Root
Did this for months before someone at work pointed out it’s a security risk. If someone breaks into your container and you’re running as root, they basically have admin access.
Add this to your Dockerfile:
```dockerfile
RUN adduser -D appuser
USER appuser
```
Takes two seconds. Much safer.
Getting Production Ready
Once you’re comfortable with Docker locally, production is a different game. Here’s what matters:
Always scan for vulnerabilities. We use Docker Scout at HostGet. Run it before pushing to production:
```shell
docker scout quickview myapp:latest
```
Catches security issues before they become problems.
Keep secrets out of images. Never put passwords or API keys in your Dockerfile. Use environment variables:
```shell
docker run -e DATABASE_PASSWORD=secret myapp
```
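On the app side, read the secret from the environment at runtime so it never gets baked into the image. A minimal sketch (the variable name matches the run command above; the helper function itself is hypothetical):

```python
import os

def database_password() -> str:
    # Read the secret at runtime from the environment,
    # so it never appears in the Dockerfile or the image layers.
    password = os.environ.get("DATABASE_PASSWORD")
    if not password:
        raise RuntimeError("DATABASE_PASSWORD is not set")
    return password
```

Failing loudly when the variable is missing beats connecting with an empty password and debugging a vague auth error later.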
Tag everything properly. We use semantic versioning: myapp:1.2.3. Makes it easy to roll back if something breaks.
Connecting Docker to Your Deployment Pipeline
We use GitHub Actions at HostGet. Every push to main automatically builds a Docker image, runs tests, and pushes to AWS ECR if tests pass.
Sounds complicated but it’s just a few lines in a YAML file. Automates everything. No manual builds, no “forgot to push the new image” mistakes.
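Here's a rough sketch of what that kind of workflow can look like. Everything here is a placeholder, not HostGet's actual setup: the image name, the test command, and the `ECR_REGISTRY` secret are all assumptions you'd swap for your own values.

```yaml
# .github/workflows/deploy.yml -- illustrative sketch, all names are placeholders
name: build-test-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run tests inside the image
        run: docker run --rm myapp:${{ github.sha }} python -m pytest
      - name: Push to ECR (only runs if tests passed)
        run: |
          aws ecr get-login-password | docker login --username AWS --password-stdin "$ECR_REGISTRY"
          docker tag myapp:${{ github.sha }} "$ECR_REGISTRY/myapp:${{ github.sha }}"
          docker push "$ECR_REGISTRY/myapp:${{ github.sha }}"
        env:
          ECR_REGISTRY: ${{ secrets.ECR_REGISTRY }}
```

Tagging with the commit SHA means every pushed image traces back to the exact code that built it.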
What About Kubernetes?
Everyone asks me this. “Should I learn Kubernetes next?”
Maybe. Depends what you’re doing.
Docker is great for running containers. Kubernetes is for when you have tons of containers across multiple servers and need them to automatically scale, restart when they crash, and handle traffic.
At HostGet we use Kubernetes because we’re running hundreds of services. But for smaller projects? Docker Compose is plenty.
Learn Docker first. Get really comfortable with it. Then if you need Kubernetes, you’ll understand the fundamentals already.
How Long Does This Actually Take?
Honestly? You can be dangerous with Docker in two weeks. By dangerous I mean you can containerize basic apps and run them without breaking things.
Getting good? Maybe two months of regular use. You’ll hit problems, figure them out, learn what works and what doesn’t.
Being an expert? I’ve been using Docker for three years and I still learn new stuff. But that’s true of everything in tech, right?
My Actual Learning Path
Week 1: Ran pre-built containers. Nginx, Redis, PostgreSQL. Just played around.
Week 2: Read about how Docker works. Watched a few YouTube videos. Took notes.
Week 3: Containerized a simple Flask API I’d built. Took two days of frustrated googling but got it working.
Week 4: Started using Docker Compose. Containerized a project with multiple services.
Month 2: Used Docker for everything at work. Hit problems, asked coworkers, figured stuff out.
Month 3: Got comfortable. Could troubleshoot issues without googling everything.
That’s realistic. Not “learn Docker in a weekend” stuff. Real learning takes time.
Resources That Actually Helped
Docker’s official documentation is good once you understand the basics. Before that, it’s overwhelming. YouTube channels like TechWorld with Nana explain things in plain English. That helped a lot. Best resource though? Actually using it. Every project I containerized taught me more than any tutorial could.
My Final Thoughts
Docker isn’t magic. It’s just a tool. Useful tool, sure. But still just a tool. Start simple. Pull an image. Run it. See it work. Then build your own. Then use Compose. Take it step by step.
Three years ago I was intimidated by Docker. Now I use it every day at HostGet and can’t imagine working without it. You’ll get there too. Just start. You’ll make mistakes. Everyone does. That’s how you learn. Now stop reading and go run your first container already.
