The need for reducing Docker image size
To provide convenient delivery and faster deployment of our tools we, like everybody else, use Docker. This article describes our experience of using containers for distributing our product Acra (a database encryption suite) and focuses on the method we used to reduce the size of Docker images by approximately 62-64 times.
It’s not like we’ve made a revolutionary discovery, but as developers, we found it interesting to trace the steps from the moment of packaging a product into a container to trimming it down to a small Docker image. And that’s precisely what this article is about.
Docker image size challenge
The standard approach to creating Docker images left us with containers of considerable size. For instance, the container with AcraConnector v0.75 amounted to roughly 965 MB, while AcraServer of the same version was 991 MB!
This means that during the deployment of a fully-functional 3-piece environment for Acra (AcraServer, AcraConnector, PostgreSQL) we had to download a total of 2100 MB worth of containers, which is a lot, even in the age of fast broadband connections. Such an amount of data took a long time (approximately 11+ minutes on our test machine) to download and start. This could seem suspicious to our users, to say the least. We know we’d be suspicious of 2100 MB of slow-moving strange code!
But this is what was actually happening during those 11 minutes:
$ time docker image pull cossacklabs/acra-server
Using default tag: latest
latest: Pulling from cossacklabs/acra-server
ce657b5f60b9: Pull complete
52befadefd24: Pull complete
3c0732d5313c: Pull complete
fee55c622298: Pull complete
85155ee2fbc2: Pull complete
c51febe84798: Pull complete
52609aaab90b: Pull complete
01b53386cf56: Pull complete
b9e20339fbb7: Pull complete
54f23e6169f4: Pull complete
d8ac303a3d63: Pull complete
058218cc830b: Pull complete
eda618658811: Pull complete
d37f8027d346: Pull complete
db1c731e6012: Pull complete
c49f3c36e93d: Pull complete
4de101584539: Pull complete
Digest: sha256:781d57c26ae80d2dc1547a1b4250734444da2446481c0ae494f3a0c38be53fa8
Status: Downloaded newer image for cossacklabs/acra-server:latest
real 11m15.954s
user 0m0.340s
sys 0m0.160s
And the end result was bulky:
$ docker image list | grep cossacklabs
cossacklabs/acra-server latest 01ca600bc81d 12 months ago 991MB
cossacklabs/acra-connector latest d851cab6e97b 12 months ago 965MB
Why are the Docker containers so big?
The very concept behind using a Docker container assumes a gradual “layering” of a base image onto the filesystem (the base image is set in the FROM field of a Dockerfile) together with the results of running each step (RUN, COPY, and ADD). Every new layer only stores the changes relative to the previous layer. The official Docker documentation explains this in detail.
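You can inspect these layers for any image with the standard docker image history command, which lists every layer together with the instruction that created it and the space it adds (a quick illustration; the exact output depends on the image and the Docker version):
$ docker image history cossacklabs/acra-server:latest
# each line of the output corresponds to one Dockerfile instruction
# (FROM, RUN, COPY, ADD, ...) and shows how much space that layer adds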
Such an approach has its advantages:
● It allows using caching mechanisms during the re-assembling of an image;
● It decreases the actual space taken up by the images on the hard drive through layer reuse;
● It allows performing a rollback to any of the intermediate layers (this is useful for debugging).
While these advantages pay off during development and testing, in the case of product distribution the unnecessary data preserved in the layers leads to a considerable overhead.
Another common and more convincing reason is that a Dockerfile often describes the whole process of installation from source. This is convenient, especially in the world of Open Source, as it provides visibility and the possibility to install the product on any platform using Docker. You can even edit it and experiment with the build process if needed. However:
● The basis for a Docker image is usually a universally popular OS distribution (vanilla or tweaked environments created specifically for certain development purposes);
● The commands apt-get update and yum makecache fast are usually the very first commands that are run;
● Compilers and development environments are installed;
● The additional auxiliary software is installed (software for working with repositories, web utilities, etc.).
Each of these steps increases the size of the final image. For instance, this is what our Dockerfile for AcraServer looked like in the previous release:
FROM golang
RUN apt-get update && apt-get install -y libssl-dev
# install themis
RUN git clone https://github.com/cossacklabs/themis.git /themis
WORKDIR /themis
RUN make install && ldconfig
RUN rm -rf /themis
WORKDIR /go
ENV GOPATH /go
# build AcraServer
RUN go get github.com/cossacklabs/acra/...
RUN go build github.com/cossacklabs/acra/cmd/acra-server
RUN go build github.com/cossacklabs/acra/cmd/acra-addzone
RUN go build github.com/cossacklabs/acra/cmd/acra-poisonrecordmaker
RUN go build github.com/cossacklabs/acra/cmd/acra-rollback
RUN go build github.com/cossacklabs/acra/cmd/acra-keymaker
VOLUME ["/keys"]
ENTRYPOINT ["acra-server", "--db_host=postgresql_link", "-v", "--keys_dir=/keys"]
# acra server port
EXPOSE 9393
# acra http api port
EXPOSE 9090
As a result, we got containers around 1 GB in size for each component of our product. Sure, all of this is necessary for compilation and installation, but for distribution we’d ideally need a container that only contains the elements that are absolutely necessary for the product to run. Next, we’ll describe the methods we tried that resulted in the bite-sized containers we can actually use.
Four ways to reduce Docker image size
The Docker community is dynamically developing and evolving together with the Docker technology itself, so we were not the first to think about solving the image bloat problem. Let’s first take a look at the currently known ways of reducing Docker image size and their effectiveness.
1. Deleting files after compilation
This approach turned out to be inefficient for us and created a lot of confusion: files that are deleted at a later stage of the installation are still preserved in the previous layers, so the cleanup step doesn’t shrink anything and only adds another layer to the final image.
To evaluate this approach, let’s create a test Docker file:
$ cat > ./test-rm-01.Dockerfile <<'EOF'
FROM debian:stretch
RUN fallocate -l 100M /test.file
RUN ls -al /test.file
RUN rm /test.file
RUN ls -al /test.file || true
ENTRYPOINT ["/bin/bash"]
EOF
And let’s compile an image:
$ docker build --tag test-rm-01 -f ./test-rm-01.Dockerfile .
Step 1/6 : FROM debian:stretch
---> da653cee0545
Step 2/6 : RUN fallocate -l 100M /test.file
---> Using cache
---> 3bb0f1b5cfb3
Step 3/6 : RUN ls -al /test.file
---> Using cache
---> 788b049245ef
Step 4/6 : RUN rm /test.file
---> Using cache
---> 9cea85663bbb
Step 5/6 : RUN ls -al /test.file || true
---> Running in 0c9b88282a92
ls: cannot access '/test.file': No such file or directory
Removing intermediate container 0c9b88282a92
---> 01fde2ee07c6
Step 6/6 : ENTRYPOINT ["/bin/bash"]
---> Running in dfe045ef01c6
Removing intermediate container dfe045ef01c6
---> d12506902cb1
Successfully built d12506902cb1
Successfully tagged test-rm-01:latest
Note how the file was deleted in Step 4/6. Let’s now check the resulting image size:
$ docker images test-rm-01
REPOSITORY TAG IMAGE ID CREATED SIZE
test-rm-01 latest d12506902cb1 About a minute ago 205MB
Deleting that file didn’t reduce the size of the final container, meaning this method doesn’t really work for us.
2. Making the previous approach work
In Dockerfiles, we often encounter lines that look like this fragment from postgres/10/alpine/Dockerfile:
RUN set -ex \
\
&& apk add --no-cache --virtual .fetch-deps \
ca-certificates \
openssl \
tar \
\
&& wget -O postgresql.tar.bz2 "https://ftp.postgresql.org/pub/source/v$PG_VERSION/postgresql-$PG_VERSION.tar.bz2" \
&& echo "$PG_SHA256 *postgresql.tar.bz2" | sha256sum -c - \
&& mkdir -p /usr/src/postgresql \
&& tar \
--extract \
--file postgresql.tar.bz2 \
--directory /usr/src/postgresql \
--strip-components 1 \
&& rm postgresql.tar.bz2 \
\
&& apk add --no-cache --virtual .build-deps \
bison \
coreutils \
dpkg-dev dpkg \
flex \
gcc \
...
&& apk add --no-cache --virtual .postgresql-rundeps \
$runDeps \
bash \
su-exec \
# tzdata is optional, but only adds around 1Mb to image size and is recommended by Django documentation:
# https://docs.djangoproject.com/en/1.10/ref/databases/#optimizing-postgresql-s-configuration
tzdata \
&& apk del .fetch-deps .build-deps \
&& cd / \
&& rm -rf \
/usr/src/postgresql \
/usr/local/share/doc \
/usr/local/share/man \
&& find /usr/local -name '*.a' -delete
Please pay attention − a single RUN command describes almost the whole process: preparation of the necessary software → compilation of the software → cleaning up. The goal here is to reduce the final container size.
Now, let’s check the result using our example. Creating a test Dockerfile first:
$ cat > ./test-rm-02.Dockerfile <<'EOF'
FROM debian:stretch
RUN fallocate -l 100M /test.file \
&& ls -al /test.file \
&& rm /test.file \
&& ls -al /test.file || true
ENTRYPOINT ["/bin/bash"]
EOF
Let’s create an image:
$ docker build --tag test-rm-02 -f ./test-rm-02.Dockerfile .
Step 1/3 : FROM debian:stretch
---> da653cee0545
Step 2/3 : RUN fallocate -l 100M /test.file && ls -al /test.file && rm /test.file && ls -al /test.file || true
---> Running in 3901f3cead2e
-rw-r--r-- 1 root root 104857600 Mar 23 10:03 /test.file
ls: cannot access '/test.file': No such file or directory
Removing intermediate container 3901f3cead2e
---> e3d04603fb45
Step 3/3 : ENTRYPOINT ["/bin/bash"]
---> Running in 96cb42e9c368
Removing intermediate container 96cb42e9c368
---> 328687619d98
Successfully built 328687619d98
Successfully tagged test-rm-02:latest
Checking the final size of the resulting container:
$ docker images test-rm-02
REPOSITORY TAG IMAGE ID CREATED SIZE
test-rm-02 latest 328687619d98 About a minute ago 100MB
As we remember, a layer stores the final result of the whole step. So it’s not surprising that the size of the image matches the base image − Step 2/3 left nothing behind, after all.
This optimisation method can be considered effective on the condition that the unnecessary data is deleted in the same step where it is created. Otherwise, deleting that data in the following steps has no effect, just as in the first method. Unfortunately, the advice to “group commands into a single step” is often given without mentioning this important data-clearing condition.
3. Squashing images
Given the layered structure of the container filesystem, the method of creating one single resulting layer without the intermediate excess data is quite an intuitive one. For instance, this can be done with the docker-squash utility; its usage and results are well documented on the docker-squash GitHub page.
At its core, this method yields results similar to the previous one. However, it grants more freedom in writing a Dockerfile: there is no need to closely control the excess data at every step of the image creation.
The downside of this approach, in our opinion, is the need for 3rd-party components during the build. This hinders publishing images in open repositories, because a user who wishes to build the image on their machine will need to install additional 3rd-party software.
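For reference, a typical docker-squash run looks roughly like this (a sketch assuming the Python docker-squash utility installed via pip; see its GitHub page for the authoritative usage):
$ pip install docker-squash
$ docker-squash -t test-rm-01:squashed test-rm-01:latest
$ docker images test-rm-01
# the squashed tag should no longer carry the deleted 100MB layer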
4. Building a container using a scratch image
All the methods described above have a very limited effect as they only aim at eliminating the excess data that accumulates while the container is being built. Yet most of the bulk of the resulting container is taken up by the underlying OS (and OS-based environments). The uniformity of standard OS images saves a lot of time, but they always include excess components.
It usually makes no sense to spend time on creating and supporting a custom “lightweight” OS image that only carries the components necessary to build the product. That is why we consider this way of reducing Docker image size impractical and usually use full-sized official OS images as the basis for building our products.
However, we can create the final container from the standard empty Docker image scratch by placing a pre-compiled product with all the necessary libraries into it. The general scheme for building the final container from scratch is well documented in the official Docker docs. And yes, this works − but only in the isolated case of static linking in a product written in Go.
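As a minimal illustration of that isolated case (a hypothetical hello program, not our actual build), a statically linked Go binary can be dropped into an otherwise empty image:
$ CGO_ENABLED=0 go build -o hello .
$ cat > ./hello.Dockerfile <<'EOF'
FROM scratch
COPY hello /hello
ENTRYPOINT ["/hello"]
EOF
$ docker build --tag hello-scratch -f ./hello.Dockerfile .
The resulting image contains nothing but the binary itself, which is precisely why this only works when there are no dynamic library dependencies.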
Starting with version 17.05, Docker considerably expanded its functionality by adding support for multi-stage builds. The main idea is to use intermediate images as a platform for compilation and then create the final image by copying the results of the previous stage into it. Using this technique, building our AcraServer Docker image could now look like this:
FROM debian:stretch as acra-build
ARG VERSION
ARG VCS_REF
ARG VCS_BRANCH
ARG BUILD_DATE
RUN apt-get update && apt-get -y install \
apt-transport-https \
build-essential \
ca-certificates \
curl \
git \
gnupg \
libssl-dev \
openssl \
rsync \
wget
WORKDIR /root
RUN ["/bin/bash", "-c", \
"set -o pipefail && \
curl -sSL https://pkgs.cossacklabs.com/scripts/libthemis_install.sh | \
bash -s -- --yes --method source --branch $VCS_BRANCH \
--without-packing --without-clean"]
RUN GO_SRC_FILE="go1.9.3.linux-amd64.tar.gz" && \
wget --no-verbose --no-check-certificate \
"https://storage.googleapis.com/golang/${GO_SRC_FILE}" && \
tar xf "./${GO_SRC_FILE}"
RUN git clone -b $VCS_BRANCH https://github.com/cossacklabs/acra
ENV GOROOT="/root/go" GOPATH="/root/gopath"
ENV PATH="$GOROOT/bin/:$PATH"
ENV GOPATH_ACRA="${GOPATH}/src/github.com/cossacklabs/acra"
RUN mkdir -p "${GOPATH}/src/github.com/cossacklabs/acra" && \
rsync -au acra/* "${GOPATH_ACRA}/"
RUN mkdir -p "${GOPATH}/src/github.com/cossacklabs/themis/gothemis" && \
rsync -au themis/gothemis/ \
"${GOPATH}/src/github.com/cossacklabs/themis/gothemis"
RUN go get -d -t -v -x github.com/cossacklabs/acra/...
RUN go get -v -x github.com/cossacklabs/acra/...
COPY collect_dependencies.sh .
RUN chmod +x ./collect_dependencies.sh
RUN for component in server connector; do \
./collect_dependencies.sh \
"${GOPATH}/bin/acra-${component}" "/container.acra-${component}" && \
cp "${GOPATH}/bin/acra-${component}" "/container.acra-${component}/"; \
done
FROM scratch
ARG VERSION
ARG VCS_URL
ARG VCS_REF
ARG VCS_BRANCH
ARG BUILD_DATE
COPY --from=acra-build /container.acra-server/ /
VOLUME ["/keys"]
EXPOSE 9090 9393
ENTRYPOINT ["/acra-server"]
CMD ["--db_host=postgresql_link", "-v", "--keys_dir=/keys"]
Here we proceed in two stages. First, we:
- Prepare the environment;
- Install the dependencies;
- Carry out the compilation.
Next, we:
- Prepare the final image;
- Copy the pre-compiled product and libraries into the final image.
Unfortunately, since it is impossible to specify a branch or a tag for the go get command, we need to use git clone → rsync → go get to compile the necessary version of the product. We’ll explain the use of the collect_dependencies.sh script below. The important point here is that an essentially two-stage build is described in a single Dockerfile.
Reducing Docker image size for Acra containers
Before we proceed, it is worth noting that we don’t always use Go everywhere or get the end product as a static pre-compiled binary file. And sometimes we want to put additional utilities into the container.
For a number of reasons (e.g. differences in behaviour between Go versions in CGO_ENABLED=0 mode with static linking of external libraries, which sometimes make it impossible to actually use static linking), we decided to use the standard compilation method.
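A quick way to check whether a given binary is actually static is to run ldd against it (shown here with a hypothetical mytool package; for a fully static binary ldd reports “not a dynamic executable”):
$ CGO_ENABLED=0 go build -o mytool .
$ ldd ./mytool
# any cgo-linked external libraries show up here as dynamic dependencies instead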
We often see images of popular products that are hundreds of megabytes in size because they also preserve the whole underlying OS distribution. In our opinion, it’s unwise to ignore the amount of time it takes a potential user to download the product and start using it. And since we strive to make security products engineer-friendly, going the extra mile to significantly decrease download time made sense to us. Besides, our ultimate goal was to find a solution that would meet the following criteria:
● Integration into the fully automated CI/CD process;
● Storing all of the code in an open repository;
● Reproducibility on the user platform with minimal effort.
Obviously, if we try to copy a pre-compiled binary into the container without adding its dependencies, we won’t be able to launch it. We’d see a (not very verbose) “File not found” error indicating that a required library couldn’t be located − not the executable binary itself.
We can get the list of the external dependencies using the standard command ldd:
$ ldd "${GOPATH}/bin/acra-server"
linux-vdso.so.1 (0x00007ffcb7191000)
libthemis.so => /usr/lib/libthemis.so (0x00007f54a247c000)
libsoter.so => /usr/lib/libsoter.so (0x00007f54a225a000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f54a203d000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f54a1c9e000)
libcrypto.so.1.1 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 (0x00007f54a180b000)
/lib64/ld-linux-x86-64.so.2 (0x00007f54a268b000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f54a1607000)
In addition to the list of dependencies we’ve received (which will next be copied into the resulting container), we mustn’t forget to copy the ld-linux loader (necessary for a dynamically linked program to run correctly) − so let’s determine its location and name:
$ readelf -a "${GOPATH}/bin/acra-server" | grep interpreter | grep -Po "(?<=preter:\\s).+(?=\\])"
/lib64/ld-linux-x86-64.so.2
All these pieces need to be tied together, so we created a short script (mentioned above), collect_dependencies.sh, that puts pre-prepared, ready-to-copy directories with all the dependencies together for the final image:
#!/bin/bash
set -euo pipefail

# the binary to analyse and the target directory for the library tree
FILE_ELF="$1"
DIR_CONTAINER="$2"

mkdir "$DIR_CONTAINER"

# collect the shared libraries reported by ldd
mapfile -t libs < <(ldd "$FILE_ELF" | grep '=>' | awk '{print $3}')
# add the dynamic loader (program interpreter) extracted from the ELF headers
libs+=($(readelf -l "$FILE_ELF" | grep -Po "(?<=preter:\\s).+(?=\\])"))

# copy each library into the target directory, preserving its full path
for l in "${libs[@]}"; do
    mkdir -p "${DIR_CONTAINER}/$(dirname "${l}")"
    cp -L "$l" "${DIR_CONTAINER}/${l}"
done
It accepts two parameters: a binary file to analyse and the path where the final directory structure will be created. You can see that it implements the steps described above.
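Inside the build container, the call for AcraServer then looks like this (the same paths as in the Dockerfile above):
$ ./collect_dependencies.sh "${GOPATH}/bin/acra-server" /container.acra-server
$ cp "${GOPATH}/bin/acra-server" /container.acra-server/
# /container.acra-server now mirrors the library paths reported by ldd,
# e.g. usr/lib/libthemis.so and lib64/ld-linux-x86-64.so.2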
As a result, we needed to create 2 containers for the different components of our product using the same build container. In the end, we decided to split the description of this process into 3 Dockerfiles, while not abandoning the very idea of multi-stage building.
As a bonus and for the sake of convenient work with the containers, we decided to supply them with metadata, using the recommendations from the Docker developers as a basis. We also used the steadily evolving label-schema standard.
So let’s take a look at the variant we arrived at after all these manipulations. acra-build.dockerfile:
FROM debian:stretch
ARG VERSION
ARG VCS_REF
ARG VCS_BRANCH
ARG BUILD_DATE
LABEL com.cossacklabs.product.name="acra" \
com.cossacklabs.product.version="$VERSION" \
com.cossacklabs.product.vcs-ref="$VCS_REF" \
com.cossacklabs.product.vcs-branch="$VCS_BRANCH" \
com.cossacklabs.product.component="acra-server" \
com.cossacklabs.docker.container.build-date="$BUILD_DATE" \
com.cossacklabs.docker.container.type="build"
RUN apt-get update && apt-get -y install \
apt-transport-https \
build-essential \
ca-certificates \
curl \
git \
gnupg \
libssl-dev \
openssl \
rsync \
wget
WORKDIR /root
RUN ["/bin/bash", "-c", \
"set -o pipefail && \
curl -sSL https://pkgs.cossacklabs.com/scripts/libthemis_install.sh | \
bash -s -- --yes --method source --branch $VCS_BRANCH \
--without-packing --without-clean"]
RUN GO_SRC_FILE="go1.9.3.linux-amd64.tar.gz" && \
wget --no-verbose --no-check-certificate \
"https://storage.googleapis.com/golang/${GO_SRC_FILE}" && \
tar xf "./${GO_SRC_FILE}"
RUN git clone -b $VCS_BRANCH https://github.com/cossacklabs/acra
ENV GOROOT="/root/go" GOPATH="/root/gopath"
ENV PATH="$GOROOT/bin/:$PATH"
ENV GOPATH_ACRA="${GOPATH}/src/github.com/cossacklabs/acra"
RUN mkdir -p "${GOPATH}/src/github.com/cossacklabs/acra" && \
rsync -au acra/* "${GOPATH_ACRA}/"
RUN mkdir -p "${GOPATH}/src/github.com/cossacklabs/themis/gothemis" && \
rsync -au themis/gothemis/ \
"${GOPATH}/src/github.com/cossacklabs/themis/gothemis"
RUN go get -d -t -v -x github.com/cossacklabs/acra/...
RUN go get -v -x github.com/cossacklabs/acra/...
COPY collect_dependencies.sh .
RUN chmod +x ./collect_dependencies.sh
RUN for component in server connector; do \
./collect_dependencies.sh \
"${GOPATH}/bin/acra-${component}" "/container.acra-${component}" && \
cp "${GOPATH}/bin/acra-${component}" "/container.acra-${component}/"; \
done
acra-server.dockerfile:
ARG VCS_REF
FROM cossacklabs/acra-build:${VCS_REF} as acra-build
FROM scratch
ARG VERSION
ARG VCS_URL
ARG VCS_REF
ARG VCS_BRANCH
ARG BUILD_DATE
LABEL org.label-schema.schema-version="1.0" \
org.label-schema.vendor="Cossack Labs" \
org.label-schema.url="https://www.cossacklabs.com" \
org.label-schema.name="AcraServer" \
org.label-schema.description="Acra helps you easily secure your databases in distributed, microservice-rich environments" \
org.label-schema.version=$VERSION \
org.label-schema.vcs-url=$VCS_URL \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.build-date=$BUILD_DATE \
com.cossacklabs.product.name="acra" \
com.cossacklabs.product.version=$VERSION \
com.cossacklabs.product.vcs-ref=$VCS_REF \
com.cossacklabs.product.vcs-branch=$VCS_BRANCH \
com.cossacklabs.product.component="acra-server" \
com.cossacklabs.docker.container.build-date=$BUILD_DATE \
com.cossacklabs.docker.container.type="product"
COPY --from=acra-build /container.acra-server/ /
VOLUME ["/keys"]
EXPOSE 9090 9393
ENTRYPOINT ["/acra-server"]
CMD ["--db_host=postgresql_link", "-v", "--keys_dir=/keys"]
The file acra-connector.dockerfile is similar to the previous one, with the exception of the name and the source path of the copy:
ARG VCS_REF
FROM cossacklabs/acra-build:${VCS_REF} as acra-build
FROM scratch
ARG VERSION
ARG VCS_URL
ARG VCS_REF
ARG VCS_BRANCH
ARG BUILD_DATE
LABEL org.label-schema.schema-version="1.0" \
org.label-schema.vendor="Cossack Labs" \
org.label-schema.url="https://www.cossacklabs.com" \
org.label-schema.name="AcraConnector" \
org.label-schema.description="Acra helps you easily secure your databases in distributed, microservice-rich environments" \
org.label-schema.version=$VERSION \
org.label-schema.vcs-url=$VCS_URL \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.build-date=$BUILD_DATE \
com.cossacklabs.product.name="acra" \
com.cossacklabs.product.version=$VERSION \
com.cossacklabs.product.vcs-ref=$VCS_REF \
com.cossacklabs.product.vcs-branch=$VCS_BRANCH \
com.cossacklabs.product.component="acra-connector" \
com.cossacklabs.docker.container.build-date=$BUILD_DATE \
com.cossacklabs.docker.container.type="product"
COPY --from=acra-build /container.acra-connector/ /
VOLUME ["/keys"]
EXPOSE 9191 9494
ENTRYPOINT ["/acra-connector"]
CMD ["--acraserver_connection_host=acra-server_link", "-v", "--keys_dir=/keys"]
To simplify reproducing the build of these images, we added a docker target to our Makefile:
Makefile
...
define docker_build
@docker image build \
--no-cache=true \
--build-arg VERSION=$(VERSION) \
--build-arg VCS_URL="https://github.com/cossacklabs/acra" \
--build-arg VCS_REF=$(GIT_HASH) \
--build-arg VCS_BRANCH=$(BRANCH) \
--build-arg BUILD_DATE=$(BUILD_DATE) \
--tag cossacklabs/$(1):$(GIT_HASH) \
-f ./docker/$(1).dockerfile \
./docker
for tag in $(2); do \
docker tag cossacklabs/$(1):$(GIT_HASH) cossacklabs/$(1):$$tag; \
done
endef
ifeq ($(BRANCH),stable)
CONTAINER_TAGS = stable latest $(VERSION)
else ifeq ($(BRANCH),master)
CONTAINER_TAGS = master current $(VERSION)
endif
docker:
$(call docker_build,acra-build,)
$(call docker_build,acra-server,$(CONTAINER_TAGS))
$(call docker_build,acra-connector,$(CONTAINER_TAGS))
@docker image rm cossacklabs/acra-build:$(GIT_HASH)
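With that in place, rebuilding all the images locally boils down to a single command (auxiliary variables such as VERSION and GIT_HASH are computed elsewhere in the Makefile and omitted here):
$ make docker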
The final result with Docker images of considerably reduced size:
$ docker image ls --filter label=com.cossacklabs.product.name="acra"
REPOSITORY TAG IMAGE ID CREATED SIZE
cossacklabs/acra-authmanager master 6958a9c86ff7 3 weeks ago 8.04MB
cossacklabs/acra-webconfig master d4a9e14837d2 3 weeks ago 14.4MB
cossacklabs/acra-keymaker master f03a8caa85b7 3 weeks ago 8.4MB
cossacklabs/acra-connector master a338ea3646bc 3 weeks ago 10.8MB
cossacklabs/acra-server master 2ff05c34a075 3 weeks ago 16.3MB
Please pay attention – we provided an example that uses labels for more convenient filtering.
Summary
Docker is still a relatively young technology. Even though many of us have dealt with LXC containers before, the Docker community still has a long way to go before convenient and efficient Docker images become the norm.
By implementing the technique described above, we managed to reduce the Docker image size for the components of Acra by approximately 62-64 times. At the same time, we kept a number of important characteristics in the process:
● Preserved the code readability;
● Kept the code in an open repository alongside the product;
● Made the process fully automatic, without using 3rd party tools;
● Made the process of compiling a container easily replicable by a user;
● Added metadata to the resulting images.
Research and experiments like this serve the purpose of making our tools more user-friendly in non-obvious ways. It’s important to not just produce and provide a Docker image for those who’d like to try a product, but also to make it convenient to use (all the while taking a bit of load off the world’s communication systems:)).
You can find the new lightweight Docker images for Acra and try Acra with Docker using the links and instructions on the Documentation Server.
P.S. If you're looking for new ideas, this is the right place. If you're looking to implement security, apply for our Customer Success Program. If you're looking for ready-made security solutions, consider looking into Themis, Acra, or Hermes.