
[WIP] filesystem benchmark #101

Draft: wants to merge 1 commit into main from fs-bench
Conversation

@sequix commented May 19, 2020

I am trying to write a filesystem benchmark suite for this project. Basically, it uses fio to generate synthetic I/O operations, which cause stargz-snapshotter to issue range requests to a local registry over eth0 with limited bandwidth. Meanwhile, metrics from /proc are scraped and processed afterward with prometheus and gnuplot to produce charts of the stargz-snapshotter process, like:

image

fio also records bandwidth, IOPS, and latency. These are plotted with gnuplot as well:

image

The two charts above come from a fio test inside a stargz image, which started 4 threads reading the same file until 512MiB had been read.
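For reference, one common way to cap egress bandwidth on eth0 is a tc token-bucket filter; the command below is only an illustrative sketch (the rate and the mechanism actually used by the benchmark scripts are assumptions):

# cap egress on eth0 to roughly 10MiB/s; undo with "tc qdisc del dev eth0 root"
tc qdisc add dev eth0 root tbf rate 80mbit burst 128kb latency 400ms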

@ktock (Member) left a comment

Great! Thanks for this.

Can we measure it against Docker Hub? The main concern, however, is that we would end up making many HTTP requests to the registry...
And maybe we can include a comparison with other filesystems.

cc: @AkihiroSuda

Review comments (outdated, resolved) on:
- script/fs-bench/image/run.sh
- cmd/containerd-stargz-grpc/main.go
- script/fs-bench/fio/config/randread-4.conf
- script/fs-bench/fs-bench/src/hello.py
- script/fs-bench/image/Dockerfile
@sequix (Author) commented May 22, 2020

> Great! Thanks for this.
>
> Can we measure it against Docker Hub? The main concern, however, is that we would end up making many HTTP requests to the registry...
> And maybe we can include a comparison with other filesystems.
>
> cc: @AkihiroSuda

Yes, Docker Hub is the much more general case; I'll switch the benchmark to Docker Hub.

@sequix (Author) commented May 25, 2020

Based on this test, I found something interesting. My test environment:

Kernel: 3.10.0-1062.18.1.el7.x86_64
Cores: 2
Mem: 8GiB
Hard disk bandwidth: 20MiB/s
Network bandwidth: 10MiB/s
Container system: debian 10 (buster)
Host system: centos 7
OCI image: docker.io/sequix/fio:legacy_256m_4t
stargz image: docker.io/sequix/fio:stargz_256m_4t
estargz image: docker.io/sequix/fio:estargz_256m_4t

I use fio to generate fake random read requests (pread(), to be precise). fio launches 4 threads, and each thread repeatedly pread()s a 4K block until it has read 256MiB (1024MiB across all 4 threads).
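For reference, the equivalent standalone fio invocation looks roughly like the sketch below; the job name and paths are illustrative, and the real job file in this PR lives under script/fs-bench/fio/config/:

# 4 threads, each issuing 4KiB random reads (the default psync ioengine uses pread())
# until 256MiB has been read, logging bandwidth, IOPS, and latency for gnuplot
fio --name=randread --rw=randread --bs=4k --numjobs=4 --thread --size=256m \
    --directory=/data \
    --write_bw_log=/output/bw --write_iops_log=/output/iops --write_lat_log=/output/lat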

For contrast, let's start with the OCI image:
image
It took 50s to finish the test; 1024MiB / 50s = 20.48 MiB/s, which sounds reasonable.

Now, the stargz image:
image
It took 850s to finish; 1024MiB / 850s = 1233 KiB/s.

Well, stargz has to make requests to Docker Hub and decompress gzip, so maybe estargz will do better, with its memory cache prepared before the actual preads. But:

image
It took even longer: 1024MiB / 900s = 1165 KiB/s.

And stargz-snapshotter used up a whole core handling pread requests in both the stargz and estargz scenarios (only estargz's process metrics are pasted here, because stargz's are very similar).
image

You can see from the above that the memory cache is ready at around 120s, but it still took quite a long time to finish the test. Maybe my test images are wrongly built, or is the cache to blame?

@ktock (Member) left a comment

Interesting. We need further investigation to find the bottleneck, but we definitely must improve the performance. I'll take a deeper look at it this week.

Review comments (outdated, resolved) on:
- script/fs-bench/work/Dockerfile
- script/fs-bench/work/reset.sh
- script/fs-bench/test.sh
Comment on lines 23 to 25
IMAGE_LEGACY="${IMAGE_LEGACY:-docker.io/sequix/fio:legacy_256m_4t}"
IMAGE_STARGZ="${IMAGE_STARGZ:-docker.io/sequix/fio:stargz_256m_4t}"
IMAGE_ESTARGZ="${IMAGE_ESTARGZ:-docker.io/sequix/fio:estargz_256m_4t}"
@ktock (Member):

We should use the docker.io/stargz organization here. I'll push these images to docker.io/stargz later.

Review comments on:
- script/fs-bench/fio/Dockerfile (outdated, resolved)
- script/fs-bench/fio/Dockerfile (resolved)
- script/fs-bench/tools/go.mod (outdated, resolved)
@sequix (Author) commented May 28, 2020

Rebased and signed off.

@ktock (Member) commented Jun 1, 2020

@sequix After a deeper investigation last week, it turned out that the bad read performance (#101 (comment)) on the filesystem didn't come from your benchmark method but from some bugs in the filesystem itself.
I fixed them in #105. Can you measure it again after that PR gets merged?

Thanks a lot for your testing!

And can you add Apache 2.0 license headers to the following files? They are needed to pass the CI checks. Please refer to other existing files.

- script/fs-bench/fio/Dockerfile
- script/fs-bench/work/tools/plot/fio.sh
- script/fs-bench/work/tools/process/main.go
- script/fs-bench/work/tools/scrape/main.go

I uploaded the benchmarking images to https://hub.docker.com/r/stargz/fio
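For the Dockerfile and the shell script, the header is the usual Apache 2.0 notice in # comments, roughly as sketched below; copy the exact copyright line and wording from an existing file in this repo rather than from this sketch, and use Go comment syntax for the two main.go files.

#   Copyright <copyright holder, as used in the existing files>.
#
#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.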

Review comments (outdated, resolved) on:
- script/fs-bench/work/tools/scrape/main.go
- script/fs-bench/work/tools/process/main.go
@sequix (Author) commented Jun 2, 2020

How can I check the golint error log? The GitHub Action did not provide much info to help me pass the CI.

@sequix force-pushed the fs-bench branch 4 times, most recently from cba73c9 to b775cd7 on June 2, 2020 03:32
@ktock (Member) commented Jun 2, 2020

> How can I check the golint error log? The GitHub Action did not provide much info to help me pass the CI.

Golint output is supposed to be logged to GitHub Actions. But for the header checks, we currently log just a list of the files that don't have valid headers, so we might need more verbose or friendlier logging for this (for now the list is enough, as long as we know it means "these files have no valid headers").
We are using github.com/kunalkushwaha/ltag, so https://github.com/kunalkushwaha/ltag/tree/master/template should help you find the valid header templates.
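If it helps, the same header check can be run locally with ltag; a rough sketch, with the -t/--check flags as described in the ltag README and a placeholder template path:

# report files whose headers don't match the templates; drop --check to apply headers instead
ltag -t ./path/to/template --check -v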

@sequix (Author) commented Jun 2, 2020

It seems the CRIValidation test failed in #105 too...

@ktock (Member) commented Jun 3, 2020

The recent test flakiness seems to be caused by recent updates to one of the images (nginx) used in the CRI validation test. I'm working on fixing this (please also see kubernetes-sigs/cri-tools#618); sorry for blocking this PR.

@ktock (Member) commented Jun 4, 2020

Fixed the CI flakiness (#106) and finished the read performance improvement (#105). Can you rebase?

@sequix (Author) commented Jun 4, 2020

Rebased.

For comparison, this benchmark tests all three image types (OCI, stargz, and estargz) with the following steps.

1. Use fio to generate fake pread() requests in parallel (4 threads).
2. Scrape metrics from /proc/<pid of stargz-snapshotter>.
3. Process the scraped metrics with PromQL.
4. Draw fio bandwidth/latency and process-metrics charts with gnuplot.

Signed-off-by: Chuang Zhang <[email protected]>
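Step 2 above (scraping /proc/<pid of stargz-snapshotter>) boils down to periodic sampling along the lines of the sketch below; the field choices are assumptions taken from proc(5), and the real scraper is work/tools/scrape/main.go:

# sample CPU time and resident memory of the snapshotter once per second
pid=$(pgrep -f containerd-stargz-grpc | head -n1)
while sleep 1; do
    ts=$(date +%s%3N)                                    # unix time in milliseconds
    cpu=$(awk '{print $14 + $15}' "/proc/$pid/stat")     # utime + stime, in clock ticks
    rss=$(awk '/VmRSS/ {print $2}' "/proc/$pid/status")  # resident set size, in kB
    echo "$ts $cpu $rss"
done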
@ktock (Member) left a comment

Thanks for the fix to the scraping code. I smoke-tested it and found some points to fix.
I'll take a deeper look at the data-processing code this week. I'm also not sure whether we need the rate() function for our use case; process/main.go will be much simpler if we don't use PromQL.
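For what it's worth, a per-second rate over consecutive counter samples can be computed without PromQL at all; a rough sketch over a file of "timestamp_ms value" lines (that column layout is an assumption, not the scraper's actual output format):

# print delta(value) / delta(time) between consecutive samples
awk 'NR > 1 { dt = ($1 - pt) / 1000; if (dt > 0) print $1, ($2 - pv) / dt }
     { pt = $1; pv = $2 }' samples.txt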

Comment on lines +33 to +43
RUN git clone https://github.com/opencontainers/runc \
$GOPATH/src/github.com/opencontainers/runc && \
cd $GOPATH/src/github.com/opencontainers/runc && \
git checkout d736ef14f0288d6993a1845745d6756cfc9ddd5a && \
GO111MODULE=off make -j2 BUILDTAGS='seccomp apparmor' && \
GO111MODULE=off make install && \
git clone https://github.com/containerd/containerd \
$GOPATH/src/github.com/containerd/containerd && \
cd $GOPATH/src/github.com/containerd/containerd && \
git checkout 990076b731ec9446437972b41176a6b0f3b7bcbf && \
GO111MODULE=off make -j2 && GO111MODULE=off make install
@ktock (Member):

Recently we introduced a common base image for the test code, for easier version management of containerd & runc in this project. Can we use that base image for this? You can use the snapshotter-base build target, which includes runc + containerd + containerd-stargz-grpc (built from the repo), so we won't need to build the snapshotter binary inside the testing container at runtime.
Please also see the script in the integration test.
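Very roughly, that would replace this whole runc/containerd build stage with something like the sketch below (the target name comes from the comment above; the tag and build context are illustrative):

# build the shared base (runc + containerd + containerd-stargz-grpc) from the repository root ...
docker build -t snapshotter-base --target snapshotter-base .
# ... and start the benchmark image FROM that tag instead of rebuilding runc and containerd here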

printf "\n"
else
printf "unset key\n"
printf 'plot "%s" w l lw 2\n' "$LOGS_BW"
@ktock (Member):

$LOGS_BW => $LOGS?

Comment on lines +19 to +25
# Set these environment variables if you want to use a customized fio image,
# whose entrypoint must start a fio test and write all of its logs
# (stdout in the file `stdio`, plus bw_log, iops_log, and lat_log) to /output.
# 256m_4t stands for 4 threads, each reading 256MiB (1024MiB in total).
IMAGE_LEGACY="${IMAGE_LEGACY:-docker.io/stargz/fio:legacy_256m_4t}"
IMAGE_STARGZ="${IMAGE_STARGZ:-docker.io/stargz/fio:stargz_256m_4t}"
IMAGE_ESTARGZ="${IMAGE_ESTARGZ:-docker.io/stargz/fio:estargz_256m_4t}"
@ktock (Member):

Could you fix the naming convention of the fio images to fio:256m-4t-{org,sgz,esgz} (which stands for 4 threads, each reading 256MiB)?
Please also see: https://hub.docker.com/r/stargz/fio/tags

return
}

ts := " " + strconv.FormatInt(now.Unix()*1000+int64(now.Nanosecond())/1e6, 10) + " "
@ktock (Member):

Why don't we do now.UnixNano() / int64(time.Millisecond)?

kill_all "containerd"
kill_all "containerd-stargz-grpc"
kill_all "scrape"
if [ "$NOCLEANUP" == "-nocleanup" ]; then
@ktock (Member), Jun 25, 2020:

Unbound variable when no option is passed. It should be ${NOCLEANUP:-}.

else
cleanup
fi
if [ "${NOSNAPSHOTTER}" == "-nosnapshotter" ] ; then
@ktock (Member):

Same as the above. Should be ${NOSNAPSHOTTER:-}.
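For illustration, the difference under "set -u" looks like this (hypothetical script, not the benchmark script itself):

set -eu
# NOCLEANUP may legitimately be unset when no option is passed;
# "${NOCLEANUP}" would abort with an "unbound variable" error, while "${NOCLEANUP:-}" falls back to an empty string
if [ "${NOCLEANUP:-}" == "-nocleanup" ]; then
    echo "skipping cleanup"
fi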

@AkihiroSuda marked this pull request as draft on August 26, 2021 01:40