Discussion (11 comments)
"Smol machines" is an extremely lightweight, portable virtual machine environment whose cold starts complete in under a second. It should be a very attractive option for engineers who want to push development speed as far as it will go.
Hello, I'm building a replacement for Docker containers: a virtual machine with the ergonomics of containers plus sub-second start times.
I previously worked at AWS in the container space and with Firecracker. I realized the container is an unnecessary layer that slows things down, and that Firecracker was a technology designed around AWS's org structure and use case.
So I ended up building a hybrid that takes the best of containers and the best of Firecracker.
Let me know your thoughts, thanks!
Great job with the comparison table. Immediately I was like "neat, sounds like Firecracker," then saw your table showing where it's similar and different. Easy!
Nice job! This looks really cool
The feature that lets you create self-contained binaries seems like a potentially simpler way to package JVM apps than GraalVM Native.
Probably a lot of other neat use cases for this, too.
smolvm pack create --image python:3.12-alpine -o ./python312
./python312 run -- python3 --version
# Python 3.12.x — isolated, no pyenv/venv/conda needed
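Presumably the same pattern would extend to packaging a JVM app, as the parent comment suggests. A hypothetical sketch, reusing only the subcommands and flags shown in the Python example above (`pack create`, `--image`, `-o`, `run --`); the `eclipse-temurin:21-alpine` image name is an assumption, not something confirmed by the smolvm docs:

```shell
# Hypothetical: package a JRE the same way as the Python example above.
# Only the CLI shape is taken from the thread; the image name is assumed.
smolvm pack create --image eclipse-temurin:21-alpine -o ./jre21
./jre21 run -- java --version
# A JVM app jar could then presumably be mounted or baked in and launched
# with `./jre21 run -- java -jar app.jar` — no GraalVM native-image step.
```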
smolvm is awesome. The team is highly responsive and very experienced. They clearly know what they’re doing.
I’m currently evaluating smolvm for my project, https://withcave.ai, where I’m using Incus for isolation. The initial integration results look very promising!
Can .smolmachine files be digitally signed and self-authenticate when run? Similar to https://docs.sylabs.io/guides/main/user-guide/signNverify.html
Is there a relation to the similarly-purposed and similarly-named https://github.com/CelestoAI/SmolVM?
Hey this is pretty neat! I definitely would try using this for benchmarks and other places where I need strong isolation as Docker is just too bloated and slow, but sadly I don't think I can run this natively on my Windows laptop. I hope you extend to WSL! Good luck and congrats on launch.
What I really like about containers is quickly being able to spin one up without having to specify resources (e.g. RAM limit). I hope this would let me do that also.
I see the alpine and python:3.12-alpine images in your CLI docs. Where do these come from? Is it a Docker-like registry, or are they built in? Can I create my own images, or is this purely done with the smolfile? Is there an Ubuntu image available?
Looks really nice, btw. Hot-resizing memory/CPU would be nice. This could then become a nice technology for a one-backend-per-customer infra orchestrator.
Any integration with existing orchestrators? Plans to support any or building your own?