The Commute Listening Project

The Project

A New Year, a New Routine

This year—more precisely starting from Wednesday, January 14th—I began a new job. The new position comes with a noticeably longer commute: about one hour each way, resulting in roughly two hours per day spent on the road.

Rather than letting this time slip by, I decided to turn it into an opportunity to broaden my musical knowledge. Listening to something different every day helps me avoid the monotony of highway driving while discovering new albums and unexpected gems. I’ve always been the kind of person who listens to a bit of everything, without a single favorite genre, which makes this project a natural fit.

My goal is simple: commit to listening to at least one—and at most two—new albums every day.

How This Project Works

This page is updated dynamically and displays a mosaic of all the albums I listen to, along with the corresponding listening date. I also added some buttons and dropdown menus to filter and display the albums in different ways—just for fun.

Clicking on an album opens a popup with detailed information about it. From there, you can view additional album details and optionally open its Spotify page. To close the popup, simply click outside the panel.

To keep the workflow as simple and efficient as possible, once I finish listening to an album on Spotify I simply share its URL from the Spotify app to a Telegram bot. The key aim of this setup is to make updating effortless and fast: the mosaic stays accurate with no manual work beyond sending that single link.

For more information on how the system is implemented, see the Technical Details section of this note.

The Music Mosaic


[Interactive album mosaic rendered here, with Year / Month / Week grouping controls.]

Technical Details

This project is built around a simple but robust idea: separating data ingestion from data visualization, using a lightweight, serverless architecture.

The system is composed of two independent pipelines:

  • a write pipeline, responsible for collecting and storing albums

  • a read pipeline, used by the frontend to display and explore the data

Both pipelines are powered by Cloudflare Workers and share the same database as a single source of truth.
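As an illustration of what that single source of truth might contain, here is a hypothetical D1 (SQLite) schema. The table and column names are my own sketch, not the project's actual definition:

```sql
-- Hypothetical schema for the shared albums table.
-- Column names are illustrative; the real project may differ.
CREATE TABLE IF NOT EXISTS albums (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    spotify_url TEXT NOT NULL UNIQUE, -- UNIQUE constraint backs the duplicate check
    title       TEXT NOT NULL,
    artist      TEXT NOT NULL,
    cover_url   TEXT,
    listened_at TEXT NOT NULL         -- ISO 8601 listening date
);
```

A `UNIQUE` constraint on the Spotify URL is one simple way to let the database itself reject duplicate submissions, rather than checking in application code.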

```mermaid
flowchart LR
    TG[Telegram Bot] -->|Webhook| CW1[Worker<br>Write API]
    CW1 --> SL[Song.link API]
    CW1 --> DB[(Cloudflare D1)]
    FE[Frontend] -->|Fetch| CW2[Worker<br>Read API]
    CW2 --> DB
```

Write Pipeline (Album Ingestion)

The write pipeline is triggered by a Telegram bot. Every time I send a Spotify album link to the bot, the following steps occur:

```mermaid
flowchart TD
    TG[Telegram Bot]
    WH[Webhook Endpoint]
    CW[Cloudflare Worker]
    VAL[Message Validation]
    SL[Song.link API]
    DB[(Cloudflare D1 Database)]
    RESP[Bot Response]
    TG --> WH
    WH --> CW
    CW --> VAL
    VAL --> SL
    SL --> CW
    CW --> DB
    DB --> CW
    CW --> RESP
```

Step-by-step explanation:

  1. Telegram → Webhook: Telegram sends the message payload to a webhook endpoint exposed by a Cloudflare Worker.

  2. Message validation: The Worker validates the message, ensuring it contains a supported Spotify URL and extracting the relevant identifiers.

  3. song.link API: The Spotify link is sent to the song.link API to normalize the URL and resolve consistent metadata across platforms. I would have preferred to use the official Spotify APIs directly, but as of January 24th, 2026, the ability to create new Spotify apps is still disabled indefinitely. Maybe one day I’ll build a version 2.0 of this project leveraging those APIs, but for now, this approach will stay as it is.

  4. Metadata extraction: Album name, artist, cover image, Spotify URL, and timestamp are extracted and prepared for storage.

  5. Database persistence (Cloudflare D1): The album is stored in a D1 SQLite database. Duplicate checks prevent the same album from being inserted multiple times.

  6. Bot feedback: The Worker sends a response back to the Telegram bot (success, duplicate, or error).
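To make the validation step concrete, here is a minimal TypeScript sketch of how a Worker might extract the album identifier and build the song.link request. The function names and the exact regex are my own assumptions, not the project's actual code:

```typescript
// Extract a Spotify album ID from a shared URL, or return null when the
// message is not a supported album link. Hypothetical helper: the real
// Worker's validation logic may differ.
function extractSpotifyAlbumId(text: string): string | null {
  const match = text.match(
    /https?:\/\/open\.spotify\.com\/(?:intl-[a-z]+\/)?album\/([A-Za-z0-9]+)/
  );
  return match ? match[1] : null;
}

// song.link (Odesli) exposes a lookup endpoint that takes a platform URL
// and returns normalized cross-platform metadata.
function songLinkRequestUrl(spotifyUrl: string): string {
  return `https://api.song.link/v1-alpha.1/links?url=${encodeURIComponent(spotifyUrl)}`;
}

const id = extractSpotifyAlbumId(
  "https://open.spotify.com/album/4aawyAB9vmqN3uQ7FjRGTy"
);
console.log(id); // → "4aawyAB9vmqN3uQ7FjRGTy"
```

Rejecting anything that is not an album link at this stage keeps the rest of the pipeline simple: downstream steps can assume they always receive a well-formed identifier.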

This pipeline is optimized for reliability and minimal latency, with no persistent server infrastructure.

Read Pipeline (Frontend Visualization)

The read pipeline is completely decoupled from Telegram and is used exclusively by the frontend.

```mermaid
graph TD
    FE[Frontend Markdown Page]
    FETCH[Fetch API]
    CW[Cloudflare Worker]
    DB[(Cloudflare D1 Database)]
    JSON[JSON Response]
    UI[Client Side Rendering]
    FE --> FETCH --> CW
    CW --> DB --> CW
    CW --> JSON --> UI
```

How it works:

  • The frontend page fetches album data via a public API endpoint exposed by the Worker.

  • The Worker queries the D1 database and returns a JSON payload.

  • All sorting, filtering, grouping, pagination, and UI interactions are handled client-side using JavaScript.
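As a sketch of the client-side grouping, assuming each album record in the JSON payload carries an ISO listening date (field names here are hypothetical, not the actual API response):

```typescript
// Hypothetical shape of an album record returned by the read API.
interface Album {
  title: string;
  artist: string;
  listenedAt: string; // ISO 8601 date, e.g. "2026-01-14"
}

// Group albums by "YYYY-MM" so the mosaic can render one section per month.
// The same idea extends to per-year or per-week grouping by changing the key.
function groupByMonth(albums: Album[]): Map<string, Album[]> {
  const groups = new Map<string, Album[]>();
  for (const album of albums) {
    const key = album.listenedAt.slice(0, 7); // "YYYY-MM"
    const bucket = groups.get(key) ?? [];
    bucket.push(album);
    groups.set(key, bucket);
  }
  return groups;
}
```

Because the Worker only ever returns the raw list, all of these groupings can be recomputed instantly in the browser without another round trip.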

This approach keeps the backend simple and stateless while allowing the frontend to remain highly interactive.

Summary

A personal project that turns daily commuting time into a music discovery routine. Each day, one or two new albums are logged automatically via Spotify, Telegram, and a serverless backend, creating an interactive, filterable mosaic of listening history.

What's New in This Version

- The default display option is now "divide by pages".
