Starting with Docker and ASP.NET Core

I love the .NET world, but it always felt strange to me that you could run your code only inside the Windows ecosystem. Now, with .NET Core, you can develop and run .NET applications on Linux and enjoy all the benefits of the Linux platform. One of those benefits is the ability to use Docker natively.

All major cloud services now offer direct deployment of Docker containers. It means you don't have to worry about dependencies and software versions - everything is inside your container. That's why I decided to learn about Docker and share what I learned here.

In this tutorial, you will learn what ASP.NET Core and Docker are, and how to use them together.

ASP.NET Core

What is ASP.NET Core?

Microsoft defines ASP.NET Core as:
ASP.NET Core is a cross-platform, high-performance, open-source framework for building modern, cloud-based, Internet-connected applications.

What about .NET Core? 

.NET Core is the framework that ASP.NET Core runs on. It is a cross-platform, open-source re-implementation of the .NET Framework. If you want to know more in depth, you can read the official description from Microsoft.

In this tutorial, we will build a sample ASP.NET Core MVC site as a demo and host it inside a Docker container. First, we need to install .NET Core on your machine.

Install .NET Core

  • Windows: you have to download and install the .NET SDK;
  • Linux: you can use the commands described here;  
  • macOS: download and install the pkg;
We can check that it is installed by typing the following command in your console:
$ dotnet --version
If everything is OK, you will see output like:
$ dotnet --version
2.2.203

Create ASP.NET Core application

For this guide, we will create a sample ASP.NET Core MVC site (MVC is an implementation of the model-view-controller pattern).

Create a folder named NetCoreExample. Now open a terminal (or a command prompt on Windows) and navigate to the created directory. Type the following command:
$ dotnet new mvc
After this command, the directory structure will look like this:
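(The exact files depend a bit on the SDK version; with the 2.2 SDK it is roughly the following.)
NetCoreExample/
  Controllers/
  Models/
  Views/
  wwwroot/
  Properties/
  appsettings.json
  appsettings.Development.json
  Program.cs
  Startup.cs
  NetCoreExample.csproj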

Once the project is created, we can run it by typing
$ dotnet run
It can take a little while to start, but after that it should say:
Hosting environment: Development
Content root path: {project directory}
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Now you can open https://localhost:5001 or http://localhost:5000 in your browser. You will see something like:

Congratulations! We now have a running ASP.NET Core MVC site. Our next step is to make it run inside a Docker container. But first, let's figure out what a Docker container is.

Docker

What is Docker?

Docker is a platform for developers and sysadmins to develop, deploy, and run applications with containers. A Docker container helps ensure that your application works seamlessly in any environment. And compared to virtual machines, containers do not have high overhead and hence enable more efficient usage of the underlying system and resources.

What are containers? 

A virtual machine (VM) runs a full-blown “guest” operating system with virtual access to host resources through a hypervisor. In general, VMs provide an environment with more resources than most applications need.

By contrast, a container runs natively on Linux and shares the kernel of the host machine with other containers. It runs as a discrete process, taking no more memory than any other executable, which makes it lightweight.
In other words, instead of running a full new OS inside the container, Docker reuses the resources of the host OS.

Install Docker

  • Windows: you can download the installer and read the install instructions here;
  • Linux: here you can find manuals for installing on CentOS, Fedora, Debian and Ubuntu;
  • macOS: you can download the package and read the install instructions here;
When you are done installing Docker, you can verify that it is installed correctly by running the hello-world image:
$ sudo docker run hello-world
This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.
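If the installation is fine, the message starts roughly like this:
Hello from Docker!
This message shows that your installation appears to be working correctly.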

Now that we have a running ASP.NET Core site and Docker on our machine, we can start containerizing the site.

Create a Dockerfile for an ASP.NET Core application

To add the app to a container, we first need to add a file named `Dockerfile` in the root of our project folder. Now add the following text to your Dockerfile; it builds the project and runs the resulting DLL.
# First we add a dotnet SDK image to build our app inside the container
FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out

# Here we use ASP.NET Core runtime to build runtime image
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "NetCoreExample.dll"]
To make your build context as small as possible, add a `.dockerignore` file to your project folder and copy the following into it.
bin/
obj/

Build and run the Docker image

First, open your terminal and navigate to the root of our test project, where the Dockerfile is located. To build a Docker image from our app, run the following command:
$ sudo docker build -t netcoreexample .
The `-t netcoreexample` part gives the tag name `netcoreexample` to the resulting image; later we will use this name to specify which image we want to run. And the `.` at the end says that the files located in the current folder should be used as the build context.
The output for this command should be something like this:
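Abbreviated, the tail of the build output looks roughly like this (the IDs will be different on your machine):
Step 10/10 : ENTRYPOINT ["dotnet", "NetCoreExample.dll"]
 ---> Running in 3c1f2ab8d9e0
 ---> 8b535431f160
Successfully built 8b535431f160
Successfully tagged netcoreexample:latest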

It says `Successfully tagged netcoreexample:latest`, which means we have successfully created an image with the name `netcoreexample:latest`. For more details about the `docker build` command you can go here.

Now let's try to run it. To do this, we should execute the command:
$ sudo docker run -d -p 8080:80 --name myapp netcoreexample
Let's take a closer look at this command.
  • Option `-d` means that we want to run our container in the background and print the container ID.
  • Option `-p 8080:80` says that we want to bind port 80 of the container to port 8080 of the host machine. It means that when you access port 8080 of the host machine, you actually see the output from port 80 inside the container, where our site is hosted.
  • Option `--name myapp` assigns the name `myapp` to the container.
  • And finally, `netcoreexample` is the name of the image we want to run (remember `-t netcoreexample` from the build step?).
If you want to find more information about the `docker run` command, you can read it here.

The output of this command will be an ID of the created container, for example:
$ sudo docker run -d -p 8080:80 --name myapp netcoreexample
82995c0e787bd432eb5ff1bd6001b22593732a674b006b66fb19d016d48356ef
Now we can check if it's actually working! Let's go to http://localhost:8080 in your browser. If everything is correct, you will see our example site page:
Have you noticed that we are now accessing our site not at http://localhost:5000 or https://localhost:5001 but at http://localhost:8080?
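If you prefer the terminal, you can also check the container from the command line (assuming curl is installed). The exact headers depend on the template version, but you should get a successful response, roughly:
$ curl -I http://localhost:8080
HTTP/1.1 200 OK
Server: Kestrel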

What's next?

Now that we have a running Docker container, we want to check the list of images on our machine, see the list of running containers, and learn how to stop them, right?

Let's start with the list of images we have. To do this, we can use the command
$ sudo docker image ls
Now you should see all the images we've used in this tutorial:
$ sudo docker image ls
REPOSITORY          TAG                  IMAGE ID            CREATED             SIZE
netcoreexample      latest               8b535431f160        22 minutes ago      265MB
<none>              <none>               52160e57a4da        22 minutes ago      1.75GB
microsoft/dotnet    sdk                  913796b9a530        10 hours ago        1.74GB
microsoft/dotnet    aspnetcore-runtime   091bfd07b907        10 hours ago        260MB
hello-world         latest               fce289e99eb9        3 months ago        1.84kB
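The row with `<none>` in the repository and tag columns is the intermediate image left over from the first (SDK) stage of our multi-stage build. If you want to get rid of such dangling images, you can run `docker image prune`:
$ sudo docker image prune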
For more details on the `docker image ls` command, you can read the official documentation here.

To see the list of running Docker containers, you can use the command `docker container ls`.
$ sudo docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
82995c0e787b        netcoreexample      "dotnet NetCoreExamp…"   25 minutes ago      Up 25 minutes       0.0.0.0:8080->80/tcp   myapp
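Note that `docker container ls` shows only running containers; to include stopped ones as well, add the `-a` flag:
$ sudo docker container ls -a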
And finally, to stop a running container we can use the `docker stop` command. To specify which container to stop, this command uses the container name. In our case, it is `myapp`. Run:
$ sudo docker stop myapp
myapp
After this, if you try to open http://localhost:8080 or list the running containers with `docker container ls`, you won't find the example site running anymore. It means we have successfully stopped our container.
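If you also want to clean up, you can remove the stopped container and then the image:
$ sudo docker container rm myapp
$ sudo docker image rm netcoreexample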

Conclusion

In this tutorial, we've created a simple ASP.NET Core app and put it into a container. Now we are able to run this container on any platform supported by Docker without any additional changes on our side, which is quite useful.

Comments

  1. Very nice article, thanks! I do have a question however. When you start the .net core app the console displays the following output:
    Now listening on: https://localhost:5001
    Now listening on: http://localhost:5000
    This means that the .NET Core application runs on ports 5000 and 5001. But when we start the container, we specify that port 8080 should redirect to the container's port 80. So how does the container know that the traffic to port 80 should actually go to the .NET Core application, which by default is listening on ports 5000 and 5001?

    Replies
    1. Ports 5001 and 5000 are used when you run your application in debug mode. But inside the container we have a web server that runs the application on the default port 80.

    2. Thank you for the clarification!

  2. FROM microsoft/dotnet:aspnetcore-runtime
    WORKDIR /app
    COPY --from=build-env /app/out .
    ENTRYPOINT ["dotnet", "NetCoreExample.d


    Why did we add these extra build commands when we have already put the commands for base image and dependancies? Can you please explain the significance?

    Replies
    1. Seems like the word "build" here is a bit confusing.
      Let's try to go through the Dockerfile line by line:

      FROM microsoft/dotnet:sdk AS build-env
      WORKDIR /app

      here we get a docker image with only the dotnet sdk and set our working directory to "app". This image is used to "build" the C# code and publish the files to a folder. It happens here:

      RUN dotnet restore
      RUN dotnet publish -c Release -o out

      After these operations, the subfolder of our working directory "app" (/app/out) contains all the published files required to run our site. But "microsoft/dotnet:sdk" doesn't have an ASP.NET Core runtime to run this site (we used this container only for building the code), so now we need to create a container that will actually run our web site. To do this we will use this:

      FROM microsoft/dotnet:aspnetcore-runtime
      WORKDIR /app

      here we get a new docker image, "microsoft/dotnet:aspnetcore-runtime", that includes the ASP.NET Core runtime.

      COPY --from=build-env /app/out .
      ENTRYPOINT ["dotnet", "NetCoreExample.dll"]

      here we copy the files from the first image and tell "microsoft/dotnet:aspnetcore-runtime" to use "NetCoreExample.dll" as the entry point of our web site.

      Long story short.
      In this Dockerfile we are actually using 2 containers:
      1) dotnet:sdk for building and publishing our web site
      2) dotnet:aspnetcore-runtime to run the files produced by container 1).

      We are doing this to keep our containers clean, lightweight and simple. Otherwise we would have to include the dotnet core SDK in the dotnet:aspnetcore-runtime image, which could add some hundreds of megabytes to the resulting container.

      I hope this will help you. If you still have some questions, please feel free to ask.

