CHRIS HANNAM
PROFILE
Staff-level Python engineer with 20+ years of experience designing and building distributed systems, data pipelines, and event-driven architectures. Currently combining architectural leadership with hands-on technical work at a telematics company – driving platform migrations, system design decisions, and an observability transformation while continuing to review code across the platform. Prior experience building regulated health-tech products (Babylon Health) and high-throughput monitoring infrastructure processing terabytes of time-series data in real time.
TECHNICAL SKILLS
Languages: Python (20+ years), SQL, Java
Systems & Architecture: Event-driven design, Kafka, microservices, real-time data pipelines, REST APIs
Frameworks & Tools: FastAPI, Django, Flask, Airflow, Docker, pytest, Behave
Data Stores: PostgreSQL, MongoDB, InfluxDB, OpenTSDB, Elasticsearch, Snowflake, AWS Redshift; Data Formats: Avro, Parquet
Cloud & Observability: AWS (ECS, S3, Batch, CloudWatch, Redshift), GCP, Datadog, Grafana, CI/CD (Jenkins, CircleCI, GitHub Actions)
EXPERIENCE
DRVN Solutions – Remote
Head of Engineering (promoted from Senior Python Engineer via Tech Lead) | September 2021 – Present
Telematics platform processing real-time vehicle telemetry via an event-driven Kafka architecture, serving location data, driving-behaviour analysis, and fault detection to breakdown club partners. Lead a team of 15 engineers across Python, Java, and cloud infrastructure, while remaining deeply embedded in architecture, code review, and hands-on platform evolution.
- System design: Architected the migration from Elasticsearch to PostgreSQL as the primary data store, reducing operational complexity and eliminating a significant cost centre
- Observability transformation: Designing and leading the rollout of Datadog to replace a fragmented stack of custom Grafana dashboards, CloudWatch, and OpsGenie – establishing unified monitoring, alerting, and APM across all services
- Cost optimisation: Drove reliability and infrastructure optimisation work that reduced AWS spend by £40k/month (£480k annually)
- Technical direction: Set key platform technology choices – FastAPI for new services, PostgreSQL as the primary data store, Kafka for event streaming – and continue to review Python code across the platform team
- Architecture: Event-driven Kafka-based pipeline ingesting telemetry from vehicle dongles and manufacturer APIs, with RESTful services for downstream consumers
WorldRemit – London
Senior Data Engineer | June 2019 – September 2021
Built and operated large-scale data lake and ETL pipelines for an international remittance platform. Managed the full data lifecycle, from ingesting remote sources through transformation to delivery via AWS Redshift.
- Led the migration to Snowflake, delivering a new analytics platform that became the company’s primary data warehouse
- Established code review practices, testing standards, and deployment pipelines for the data platform
- Hired and mentored junior engineers, building out the data engineering function
- Stack: Python, Docker, Airflow, SQL, AWS Redshift, Snowflake
Babylon Health – London
Senior Python Developer | August 2017 – June 2019
Developed AI-powered health products classified as medical devices under regulatory oversight, requiring rigorous testing, audit trails, and quality control processes.
- Built the Health Check backend, which calculated the probability of a user developing 20+ diseases from chatbot questionnaire data and returned personalised lifestyle recommendations
- Implemented multi-language support via a translation service layer
- Worked on the real-time diagnostic chatbot backend handling live user conversations
- Established BDD testing practices using Behave and pytest across the team
Server Density – London (Remote)
Technical Lead | January 2012 – August 2017
Led a remote engineering team building high-throughput monitoring infrastructure processing terabytes of time-series data in real time.
- Designed and deployed a microservices architecture using Kafka and Storm for horizontal scaling of the monitoring pipeline
- Built a time-series data storage solution on Google Cloud using async Python 3
- Created an Android monitoring application with push notifications (Java)
- Established CI/CD pipelines using Buildbot, Travis, and Puppet
EARLIER EXPERIENCE
- Isotoma – Python Developer (2009–2011). Django, Plone, and Twisted projects.
- IOKO – Java Developer (2007–2009). Project Kangaroo/SeeSaw VOD platform; Sky.com deployments.
- Siemens IT Solutions – Perl/Unix Engineer (2006–2007). Security review for bbc.co.uk; BBC iPlayer infrastructure deployment.
- INSL – Senior Perl Developer (2004–2006). Email security product SpheriQ.
EDUCATION
BSc Software Engineering (2:1) – University of Hull