A user-friendly, customer-first dashboard for exploring San Francisco 311 service request patterns, built with Flask and React + Vite.
- Python 3.10+, Node.js 18+
- Place the `interview-credentials.json` file in the project root folder
```bash
# Make sure interview-credentials.json is in the project root before you start
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
python app.py  # → http://localhost:5000
```

```bash
cd frontend
npm install
npm run dev  # → http://localhost:5173
```

Once both are running, visit http://localhost:5173 to begin!
- Frontend: `react`, `recharts`, `tailwindcss`, `vite` (all via `npm install`)
- Backend: `flask-cors`, `python-dateutil` (already in `requirements.txt`)
- GeoJSON: `sf-districts.geojson` in `frontend/public/` for the district choropleth map
- Leaflet: loaded from CDN at runtime, no install needed
```
GET /api/districts?start_date=2023-01-01&end_date=2023-12-31
```
Ranks all 11 of SF's supervisor districts by request volume, resolution rate, median response time, and each district's distinguishing complaint category. I excluded "Street and Sidewalk Cleaning" from the distinguishing category calcs since it dominates every district by a huge margin. That data is still visible in my Complaint Composition chart in the Overview tab.
I chose this because the raw 311 data doesn't give you an easy way to answer "which district handles complaints best?" This endpoint computes that comparison in a single query using CTEs and APPROX_QUANTILES, and the frontend pairs it with a choropleth map and neighborhood drill-down so you can go from citywide patterns → district → neighborhood in a few clicks.
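To make the "single query with CTEs and APPROX_QUANTILES, dates bound as parameters" idea concrete, here is a minimal sketch of what such a query builder could look like. The table name, column names, and function name are my assumptions for illustration, not the project's actual schema or code:

```python
# Hypothetical sketch of a parameterized district-comparison query builder.
# Table/column names are assumptions; the point is the shape: one CTE-based
# query, median via APPROX_QUANTILES, and dates bound as @parameters rather
# than interpolated into the SQL string.
from datetime import date


def build_district_query(start_date: date, end_date: date):
    """Return (sql, params) for a single BigQuery call that ranks districts.

    User-supplied dates never touch the SQL text — they are passed as
    @start_date / @end_date query parameters, which is what rules out
    SQL injection.
    """
    sql = """
    WITH filtered AS (
        SELECT supervisor_district,
               TIMESTAMP_DIFF(closed_date, created_date, HOUR) AS hours_open,
               closed_date IS NOT NULL AS resolved
        FROM `some_project.some_dataset.requests_311`  -- assumed table name
        WHERE created_date BETWEEN @start_date AND @end_date
    )
    SELECT supervisor_district,
           COUNT(*) AS volume,
           COUNTIF(resolved) / COUNT(*) AS resolution_rate,
           APPROX_QUANTILES(hours_open, 2)[OFFSET(1)] AS median_hours
    FROM filtered
    GROUP BY supervisor_district
    ORDER BY volume DESC
    """
    params = {
        "start_date": start_date.isoformat(),
        "end_date": end_date.isoformat(),
    }
    return sql, params


sql, params = build_district_query(date(2023, 1, 1), date(2023, 12, 31))
```

`APPROX_QUANTILES(x, 2)[OFFSET(1)]` is BigQuery's idiomatic way to get a median: it splits the distribution into two quantile buckets and takes the middle boundary.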
Claude: used throughout for one-shot prompting. Each prompt included specific constraints around security (parameterized queries, no string interpolation because we'd hate to see SQL injection in 2026), DRY principles, and architectural decisions. I'd review, test, and refactor every response before fully integrating and moving on to the next feature.
Google Stitch: used for UI inspiration, mockups, prototyping, and design direction.
- BigQuery SQL: CTEs, window functions, prior-period comparison logic
- Leaflet map: dynamic loading, GeoJSON styling, choropleth colors
- Export modal: preview table and format picker
- Shared utilities: threshold computation, formatting helpers
- This README (though I ended up rewriting half of it)
I reviewed all AI-written code line by line and modified it to fit the project. Safety, DRYness, and speed optimizations took priority!
Optimization examples: caching repeated queries so they don't re-hit BigQuery, loading data only when the user actually needs it, and combining multiple calculations into a single database call instead of three separate ones.
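The "don't re-hit BigQuery" caching can be as small as an in-process TTL cache keyed on the query and its parameters. A sketch under those assumptions (function and constant names are mine, not the project's):

```python
# Minimal TTL cache sketch: return a cached result for a key, calling the
# expensive fetch function only on a cache miss or after the entry expires.
# TTL_SECONDS and the dict-based store are illustrative choices.
import time

_cache: dict = {}
TTL_SECONDS = 300  # assumed 5-minute freshness window


def cached_query(key, fetch, now=time.monotonic):
    """Return the cached value for `key`, refreshing via `fetch()` when the
    entry is missing or older than TTL_SECONDS."""
    hit = _cache.get(key)
    if hit is not None and now() - hit[0] < TTL_SECONDS:
        return hit[1]  # fresh enough: skip the expensive call
    result = fetch()
    _cache[key] = (now(), result)
    return result
```

In the dashboard's case, `key` would be something like the endpoint name plus its date-range parameters, so identical filter selections within the TTL window never reach BigQuery twice.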
- District comparison mode: let users select 2–3 districts (maybe w/ a chip bar) and overlay their timelines to reveal insights like "District 6 and 10 both had graffiti spikes in the springtime, but D6 resolved them 3x faster"
- Tests: pytest for endpoints, React Testing Library for components
- Better date formatting: the export modal shows raw `2023-01-01 → 2026-01-01` instead of `Jan 1, 2023 → Jan 1, 2026`
- Use a districts API instead of shipping the GeoJSON file statically, just so district boundaries stay up-to-date and the repo stays lightweight
- Remove the Farallon Islands from the map bounds. I doubt any 311 data is coming from forbidden shark-infested waters
- URL-synced filters so views are shareable, bookmarkable, and persist on refresh
- Accessibility — ARIA labels, focus management in modals, keyboard nav for the GeoJSON map
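The date-formatting item above is a tiny shared helper in practice. The export modal lives in React, but the transformation is language-agnostic; here is a Python sketch (function name is mine, not the project's):

```python
# Illustrative helper: turn raw ISO date strings into the friendly
# "Mon D, YYYY → Mon D, YYYY" range the export modal should show.
from datetime import date


def pretty_range(start_iso: str, end_iso: str) -> str:
    """Format '2023-01-01' / '2026-01-01' as 'Jan 1, 2023 → Jan 1, 2026'."""
    def fmt(iso: str) -> str:
        d = date.fromisoformat(iso)
        # Use d.day directly to avoid the zero-padded day from %d
        return f"{d.strftime('%b')} {d.day}, {d.year}"
    return f"{fmt(start_iso)} → {fmt(end_iso)}"


pretty_range("2023-01-01", "2026-01-01")  # → "Jan 1, 2023 → Jan 1, 2026"
```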