Opportunities – Trilogy


Data Engineer at Lookout
Bangalore, IN

Lookout is a cybersecurity company for the post-perimeter, cloud-first, mobile-first world. Powered by the largest dataset of mobile code in existence, the Lookout Security Cloud provides visibility into the entire spectrum of mobile risk. Lookout is trusted by hundreds of millions of individual users, enterprises and government agencies, and partners such as AT&T, Verizon, Vodafone, Microsoft, Apple and others. Headquartered in San Francisco, Lookout has offices in Amsterdam, Boston, India, London, Sydney, Tokyo, Toronto and Washington, D.C. To learn more, visit www.lookout.com and follow Lookout on its blog, LinkedIn, and Twitter.

Lookout is a modern startup for the modern world, run by apps! As part of Lookout’s engineering team, you will have an opportunity to take on some of the most interesting challenges in one or more core areas of intellectual property and fundamental building blocks that form Lookout’s category-defining Personal and Enterprise products. To tackle these challenging problems, you must be open-minded about exploring new areas as well as evolving key existing systems, such as high-scale cloud systems, mobile platform (iOS/Android) development, detection engines, analysis systems, cloud backend micro-services, front-end/UI, data engineering, machine learning, threat research, and CI/CD. If you enjoy building cutting-edge products leveraging the latest technologies, tools, and development methodologies, and want to make an immediate impact through your work, come check us out.

Responsibilities
  • Design, build & maintain reliable systems for storing, transforming, and analyzing large amounts of data
  • Manage infrastructure for batch and streaming data processing pipelines, including CI/CD tooling, monitoring, performance analysis
  • Build and validate data pipelines using Apache Spark
  • Work with analysts and engineers across the company to meet quality and timeliness goals for data products

Requirements
  • B.S. in Computer Science or related experience
  • 3+ years of experience designing, implementing, and maintaining complex software systems
  • Programming experience in Python, Scala, and/or Java
  • Experience with Big Data processing systems such as Hadoop, Spark, Kafka
  • Cloud experience with AWS (preferred), GCP, or Azure
  • Experience with agile software development, code reviews, git, and task management through tools like Jira

Nice to Have
  • Experience with ETL workflow & scheduling systems such as Airflow and/or Luigi
  • Knowledge of configuration & deployment