
7mind/distage-example

Showcase for izumi distage, BIO, tagless final, http4s, doobie and zio



Example distage project.

Features distage from Izumi project for dependency injection, BIO typeclasses for bifunctor tagless final, distage-testkit for testing, ZIO Environment for composing test fixtures, and distage-framework-docker for setting up test containers.
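As a rough illustration of what distage wiring looks like: `ModuleDef` with `make[X].from[Y]` is distage's actual binding DSL, but the `Ladder`/`DummyLadder` names below are placeholders rather than this repo's actual classes, and the snippet needs izumi's distage-core on the classpath to compile.

```scala
import izumi.distage.model.definition.ModuleDef

// Placeholder service and implementation, for illustration only.
trait Ladder
final class DummyLadder extends Ladder

object LeaderboardModule extends ModuleDef {
  // Bind the Ladder interface to a concrete implementation;
  // distage builds the object graph from such bindings at startup.
  make[Ladder].from[DummyLadder]
}
```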

There are three variants of the example project:

  • [bifunctor-tagless](bifunctor-tagless/src) – Main example. It's written in bifunctor tagless final style with BIO typeclasses, uses ZIO as the runtime, and uses ZIO Environment for composing test fixtures.
  • [monofunctor-tagless](monofunctor-tagless/src) – Written in monofunctor tagless final style with Cats Effect typeclasses; it can run on both the Cats IO and ZIO runtimes.
  • [monomorphic-cats](monomorphic-cats/src) – A simpler example written without tagless final; it uses Cats IO directly everywhere.
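The difference between the three styles can be sketched through hypothetical service signatures; `Ladder` here is a placeholder name, not necessarily the repo's actual API.

```scala
// Bifunctor tagless final: F[+_, +_] has a typed error channel (izumi BIO style).
trait BifunctorLadder[F[+_, +_]] {
  def getScore(user: String): F[String, Long] // String is the error type here
}

// Monofunctor tagless final: one type parameter; errors live inside F
// (e.g. via Cats Effect's MonadError/Sync constraints).
trait MonofunctorLadder[F[_]] {
  def getScore(user: String): F[Long]
}

// Monomorphic: the effect type is fixed to a concrete IO, no abstraction:
//   def getScore(user: String): cats.effect.IO[Long]
```

Note that any binary type constructor with covariant parameters, such as `Either`, satisfies the `F[+_, +_]` shape, which is handy for dummy implementations in tests.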

To launch the tests that require PostgreSQL, ensure you have a Docker daemon running in the background.

Use sbt test to launch the tests.

You can launch the application with one of the following commands (the -u flag picks an activation axis choice, and :leaderboard names the role to start):

# With docker daemon running
./launcher -u scene:managed :leaderboard
# Alternatively, with in-memory storage
./launcher -u repo:dummy :leaderboard

Afterwards, you can exercise the HTTP API:

curl -X POST http://localhost:8080/ladder/50753a00-5e2e-4a2f-94b0-e6721b0a3cc4/100
curl -X POST http://localhost:8080/profile/50753a00-5e2e-4a2f-94b0-e6721b0a3cc4 -d '{"name": "Kai", "description": "S C A L A"}'
# check leaderboard
curl -X GET http://localhost:8080/ladder
# user profile now shows the rank in the ladder along with profile data
curl -X GET http://localhost:8080/profile/50753a00-5e2e-4a2f-94b0-e6721b0a3cc4

If the ./launcher command fails with a cryptic stack trace, the issue is most likely with your Docker setup. First, check that both the docker and containerd daemons are running. If you're using something other than Ubuntu, follow the installation page for your distribution:

sudo systemctl status docker
sudo systemctl status containerd

Both of them should report Active: active (running). If the problem persists, your user is most likely not in the docker group; the Docker post-installation documentation covers how to add it. Don't forget to log out of your session (or restart your virtual machine) afterwards. If you still have problems, don't hesitate to open an issue.
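A quick, non-destructive way to check the group membership mentioned above (assuming a Linux shell with id(1) and grep(1); the usermod command in the message is the standard Linux fix, run it manually if needed):

```shell
# Check whether the current user is in the "docker" group.
if id -nG | grep -qw docker; then
  echo "docker group: ok"
else
  echo 'docker group: missing; fix with: sudo usermod -aG docker "$USER" and re-login'
fi
```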

Use sbt to build a native Linux binary with GraalVM NativeImage under Docker:

sbt bifunctor-tagless/GraalVMNativeImage/packageBin

If you want to build the app using a local native-image executable (e.g. on a Mac), comment out the graalVMNativeImageGraalVersion key in build.sbt first.

To test the native app with dummy repositories run:

./bifunctor-tagless/target/graalvm-native-image/bifunctor-tagless -u scene:managed -u repo:dummy :leaderboard

To test the native app with production repositories in Docker run:

./bifunctor-tagless/target/graalvm-native-image/bifunctor-tagless -u scene:managed -u repo:prod :leaderboard

Notes:

  • Currently, the application builds with GraalVM 22.3; see GraalVM's published container images for alternatives.
  • The JNA libraries are just regular Java resources. The Native Image config is currently generated for x86-64 Linux, so you'll have to re-generate it, or edit it manually, to run on other operating systems or architectures.
  • The following bugs may still manifest, but it seems like they aren’t blockers anymore:
    1. https://github.com/oracle/graal/issues/4797
    2. https://github.com/oracle/graal/issues/4282
  • Adding -Djna.debug_load=true to the native app's command line might help to debug JNA-related issues

See Native Image docs for details.

Add the following to the Java command line to run the assisted-configuration agent:

-agentlib:native-image-agent=access-filter-file=./ni-filter.json,config-output-dir=./src/main/resources/META-INF/native-image/auto-wip
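For context, that flag is passed to an ordinary JVM run of the application; while the app executes, the agent records reflection and resource accesses into config-output-dir. A sketch of a full invocation, in which the leaderboard.jar name and the role argument are placeholders and a GraalVM JDK is assumed:

```shell
# Hypothetical invocation; requires a GraalVM JDK with the native-image agent,
# and "leaderboard.jar" is a placeholder for the actual application jar.
java \
  -agentlib:native-image-agent=access-filter-file=./ni-filter.json,config-output-dir=./src/main/resources/META-INF/native-image/auto-wip \
  -jar leaderboard.jar :leaderboard
```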

Notes:

  • The codepaths in docker-java differ between the cold state (no containers running) and the hot state (containers already running). It seems we've managed to build an exhaustive ruleset for docker-java, so it's excluded in ni-filter.json. If something is wrong and you need to regenerate the rules for docker-java, run the agent twice: once in the hot state and once in the cold state.
  • Only PluginConfig.const works reliably under Native Image, so ClassGraph analysis is disabled in ni-filter.json. You can't make dynamic plugin resolution work under Native Image.
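A sketch of that distinction, assuming izumi's distage-plugins API (PluginConfig and PluginDef are real distage types, but the plugin object and package names below are placeholders, and the snippet needs distage-plugins on the classpath):

```scala
import izumi.distage.plugins.{PluginConfig, PluginDef}

object PluginWiring {
  // Native-Image-safe: the plugin is referenced statically, no classpath scanning.
  object LeaderboardPlugin extends PluginDef {
    // make[...] bindings would go here
  }
  val nativeImageSafe = PluginConfig.const(LeaderboardPlugin)

  // NOT Native-Image-safe: scans the classpath with ClassGraph at runtime.
  val scanning = PluginConfig.cached(packagesEnabled = Seq("leaderboard.plugins"))
}
```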