
Opportunities

Staff Data Engineer at Lookout
Bangalore, IN
Lookout is a cybersecurity company that makes it possible for individuals and enterprises to be both mobile and secure. With 100 million mobile sensors fueling a dataset of virtually all the mobile code in the world, the Lookout Security Cloud can identify connections that would otherwise go unseen, predicting and stopping mobile attacks before they do harm. The world's leading mobile network operators, including AT&T, Deutsche Telekom, EE, KDDI, Orange, Sprint, T-Mobile and Telstra, have selected Lookout as their preferred mobile security solution. Lookout also partners with enterprise leaders such as AirWatch, Ingram Micro and MobileIron. Headquartered in San Francisco, Lookout has offices in Amsterdam, Boston, London, Sydney, Tokyo, Toronto and Washington, D.C. To learn more, visit www.lookout.com.

Our Data Engineering team is transforming how we build products using Lookout's unique data sets about mobile devices, applications and threats. As we continue to grow and scale, we need rock-solid engineers who love the challenge of designing and building high-performance, scalable data solutions that help Lookout protect millions of mobile users. You'll design, develop and test robust, scalable data platform components. You'll work with a variety of teams and individuals, including Product Engineers, to understand their data pipeline needs and devise innovative solutions. By collaborating with our talented team of Engineers, Product Managers and Designers, you'll be a driving force in defining new data products and features. We are looking for someone with a strong background in software engineering, distributed data systems and ETL.

Responsibilities:

    Design and develop the next generation of our data platform, including data streaming, batch and replay capabilities
    Work with Engineering, Data Science, Business Intelligence and Product Management teams to build and manage a wide variety of data sets
    Analyse technical and business requirements to determine the best technologies and approaches for solving problems
    Identify gaps and build tools to increase the speed of analysis
    Design, build and launch new data models and business-critical ETL pipelines (see the sketch after this list)
    Fully participate in the ownership of your services and components, including on-call duties
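
To make the ETL responsibility above concrete, here is a minimal sketch of an idempotent Spark batch job in Scala. The bucket paths, column names and aggregation are hypothetical, chosen only to illustrate the shape of a replayable daily pipeline, not an actual Lookout job.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object DailyThreatCountsEtl {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("daily-threat-counts-etl")
          .getOrCreate()

        // Read one day's partition of raw events (hypothetical bucket, path and schema).
        val raw = spark.read.parquet("gs://example-raw/threat_events/dt=2024-01-01")

        // Deduplicate on event_id, then count events per device and threat type.
        val daily = raw
          .dropDuplicates("event_id")
          .groupBy(col("device_id"), col("threat_type"))
          .agg(count(lit(1)).as("event_count"))

        // Overwrite the target partition so a re-run (replay) yields identical output.
        daily.write
          .mode("overwrite")
          .parquet("gs://example-curated/daily_threat_counts/dt=2024-01-01")

        spark.stop()
      }
    }

Overwriting the output partition is the design choice that makes replay safe: re-running the job for a given day produces the same result instead of duplicating rows.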

Requirements:

    BS/MS in Computer Science or a related field, or equivalent work experience
    10+ years of overall software development experience, including 3+ years in data engineering
    Experience with Spark (Batch and/or Streaming), Hive and Hadoop
    Experience with Kafka or equivalent messaging systems at scale
    Experience building streaming data pipelines with Spark Streaming (see the sketch after this list)
    Proficient in designing efficient and robust ETL workflows
    Experience in optimising Spark/Hive ETLs
    Hands-on experience with GCP services such as Dataproc, BigQuery, Bigtable and Cloud Composer (Apache Airflow)
    Experience building automated deployment pipelines for data infrastructure
    Excellent communication and collaboration skills
    Proficient in Scala and Python
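
As a companion to the streaming requirement, below is a minimal sketch of a Spark Structured Streaming job in Scala that consumes device events from Kafka. The broker address, topic name, JSON fields and sink paths are hypothetical stand-ins, not Lookout's actual pipeline.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object DeviceEventsStream {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("device-events-stream")
          .getOrCreate()

        // Subscribe to a Kafka topic of device telemetry (hypothetical broker and topic).
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker-1:9092")
          .option("subscribe", "device_events")
          .option("startingOffsets", "latest")
          .load()

        // Kafka delivers the payload as bytes; cast to string and pull out two fields.
        val parsed = events
          .selectExpr("CAST(value AS STRING) AS json")
          .select(
            get_json_object(col("json"), "$.device_id").as("device_id"),
            get_json_object(col("json"), "$.event_type").as("event_type"))

        // Write micro-batches to Parquet; the checkpoint enables restart and replay.
        parsed.writeStream
          .format("parquet")
          .option("path", "gs://example-sink/device_events")
          .option("checkpointLocation", "gs://example-checkpoints/device_events")
          .start()
          .awaitTermination()
      }
    }

The checkpoint location is what lets the job restart from the last committed Kafka offsets, which is the property that makes recovery and replay practical at scale.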

Bonus Points:

    You have built a data pipeline and the infrastructure required to deploy machine learning algorithms and real-time analytics in low-latency environments
    Understanding of CI/CD automation and willingness to learn new CD platforms