We plan to move Snowplow towards being "self-hosting" by sending Snowplow events from within our own apps for monitoring purposes; the idea is that you should be able to monitor the health of one deployment of Snowplow by using a second instance. We will start "eating our own dog food" in upcoming Snowplow Kinesis releases, where the Elasticsearch Sink and Kinesis S3 Sink (now in its own repository) will both emit `write_failed` events using this new Scala event tracker.
The library is built around Akka 2.3.5; events are sent to a Snowplow collector using spray-client, and both synchronous and asynchronous event emitters are supported.
The Snowplow Scala Tracker is cross-published for Scala 2.10.x and Scala 2.11.x, and hosted in the Snowplow Maven repository. Assuming you are using SBT, you can add the tracker to your project's `build.sbt` like so:
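A minimal sketch of the SBT configuration; the repository URL and the version number shown (0.1.0) are assumptions here, so check the setup guide for the current release:

```scala
// Add the Snowplow Maven repository as a resolver
// (URL shown is an assumption -- see the setup guide)
resolvers += "Snowplow Repo" at "http://maven.snplow.com/releases/"

// Add the tracker dependency; %% picks the artifact for your Scala version
// (version 0.1.0 is illustrative -- use the latest release)
libraryDependencies += "com.snowplowanalytics" %% "snowplow-scala-tracker" % "0.1.0"
```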
For more detailed setup instructions, check out the Scala Tracker Setup Guide on the Snowplow wiki.
You’re now ready to start using the Tracker!
You will require these imports:
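Something along these lines; the exact package paths are an assumption based on the tracker's documentation, so verify them against the Technical Documentation:

```scala
// Core tracker and self-describing JSON support
import com.snowplowanalytics.snowplow.scalatracker.Tracker
import com.snowplowanalytics.snowplow.scalatracker.SelfDescribingJson

// An asynchronous emitter for sending events to the collector
import com.snowplowanalytics.snowplow.scalatracker.emitters.AsyncEmitter

// json4s DSL for building event and context JSONs
import org.json4s.JsonDSL._
```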
Create a Tracker instance like this:
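A sketch of constructing a tracker with an asynchronous emitter; the collector host, tracker namespace, and application ID below are all placeholders:

```scala
// Start an async emitter pointing at your Snowplow collector
// ("mycollector.example.com" is a placeholder hostname)
val emitter = AsyncEmitter.createAndStart("mycollector.example.com", port = 80)

// Create the tracker with a namespace and an application ID
// (both strings here are illustrative)
val tracker = new Tracker(List(emitter), "mytrackername", "myapplicationid")
```

A synchronous emitter can be substituted for the asynchronous one if you need sends to block until the collector responds.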
We will now send an unstructured event with a custom context attached. We can create the JSONs for the event using the json4s DSL:
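A sketch of what that looks like, using the tracker created above; the two Iglu schema URIs and the field values are illustrative examples, not fixed parts of the API:

```scala
// The event payload, described by an Iglu self-describing JSON schema
val eventData = SelfDescribingJson(
  "iglu:com.snowplowanalytics.snowplow/link_click/jsonschema/1-0-0",
  ("targetUrl" -> "http://snowplowanalytics.com")
)

// A custom context to attach to the event, built with the json4s DSL
val contextData = SelfDescribingJson(
  "iglu:com.snowplowanalytics.snowplow/mobile_context/jsonschema/1-0-0",
  ("osType" -> "OSX") ~ ("osVersion" -> "10.9.2")
)

// Track the unstructured event with its custom context attached
tracker.trackUnstructEvent(eventData, List(contextData))
```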
Please check out the Scala Tracker Technical Documentation on the Snowplow wiki for the tracker’s full API.
The Scala Tracker is of course very young, with a much narrower feature set than some of our other trackers, so we look forward to community feedback on which new features to prioritize. Feel free to get in touch or raise an issue on GitHub!