Opportunities – Trilogy


Senior/Principal Software Engineer - Backend and Data Infrastructure at Falkon AI
Seattle, WA, US

At Falkon, we are building a new system of intelligence that empowers professionals to define, understand and improve metrics that really matter.

We are product, engineering and research veterans from Microsoft, Amazon, Dropbox, Amperity and Zulily. Having lived through years of bad metrics meetings and fire drills, we've discovered a revolutionary way to combine machine learning and human intuition to empower professionals to define, understand and improve metrics that really matter.

We are looking for a results-oriented backend engineer who can design, develop and scale, from scratch, the internal systems that power Falkon's core product. This includes building programmable data ingestion and processing pipelines that are self-serve for every tenant; building a horizontally scalable, SQL-compatible data store that decouples compute from storage while still providing fast interactive queries; and integrating frameworks like Apache Spark so we can distribute our analytical workloads across hundreds of machines. If you've been part of a team that has built a database or an enterprise data warehouse (EDW) and are now itching to apply everything you learned to a brand-new system, this is the perfect place for you.
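To make the distributed-analytics idea concrete, here is a minimal, stdlib-only sketch of the partition-and-aggregate pattern that frameworks like Apache Spark scale across hundreds of machines. It is an illustration of the pattern only, not Falkon's actual pipeline: each "partition" of events is aggregated independently, then the partial results are merged, mirroring a map/reduce-style workload (a thread pool stands in for a cluster here).

```python
# Illustrative sketch of a map/reduce-style aggregation; metric names
# and data are made up, and a local thread pool stands in for the
# cluster that Spark would actually distribute this work across.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def aggregate_partition(events):
    """Map step: sum values per metric within one partition."""
    counts = Counter()
    for metric, value in events:
        counts[metric] += value
    return counts

def aggregate(partitions):
    """Reduce step: merge the per-partition partial aggregates."""
    total = Counter()
    with ThreadPoolExecutor() as pool:
        for partial in pool.map(aggregate_partition, partitions):
            total.update(partial)
    return dict(total)

if __name__ == "__main__":
    partitions = [
        [("signups", 3), ("revenue", 10)],
        [("signups", 2), ("revenue", 5)],
    ]
    print(aggregate(partitions))  # {'signups': 5, 'revenue': 15}
```

Because each partition is aggregated independently before merging, the same shape of computation parallelizes across machines when the partitions live on different nodes, which is exactly what Spark handles at scale.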

Our backend infrastructure is modern and fully containerized on top of Kubernetes. We're big fans of ELT and use tools like dbt and Airflow to power our data pipelines.

What you will do

  • Design, build and deploy multiple critical services, integrations and data pipelines that power the Falkon analytics system, processing many terabytes of data and producing billions of data points every day.
  • Build the large-data processing infrastructure that enables Falkon to scale horizontally.
  • Build the next generation business metrics analysis platform from scratch.
  • Help shape Falkon's culture, and build the workplace of your dreams.

What you will need to succeed

  • Alignment with Falkon's principles: Think big; Deliver results with urgency; Be radically transparent; Follow the golden rule; Get better every day.
  • Ability to operate with autonomy in highly ambiguous situations.
  • Prior experience building programmable data processing infrastructure using tools like Spark in a production environment.
  • Experience working in a fully containerized service architecture on top of Kubernetes.
  • 5+ years of experience building and deploying large-scale distributed systems.
  • Solid computer science fundamentals.
  • A willingness to put in the hard work and long hours it takes to make a startup successful.

Very nice-to-haves:

  • Prior experience on the development teams for Redshift, Snowflake, Apache Spark, Apache Storm or other distributed data warehouses or processing engines.
  • Experience building very large data ingestion/processing pipelines.
  • Experience developing/deploying/running metrics processing and monitoring systems.
  • Experience working with data scientists on machine learning pipelines.

If you're interested in rapid career growth, there is no better place to be than Falkon.

Growth comes from Impact x Learning

At Falkon you'll do your best work, develop new skills, learn from the best, discover which technical areas you're truly passionate about and help our customers grow their businesses. If you're interested in starting your own business, you'll get the opportunity to see how a venture-funded business is built from the ground up. As an early and critical member of our growing team, you will help shape our business, our processes and our culture.