Data Engineer (Remote)

  • Engineering
  • Remote job

Job description

About Us

vidIQ’s mission is to advance the creator's journey with actionable data-driven insights. We pursue this through our values of being creator obsessed, lean and fast, and being scientific. We have already helped millions of creators, and we are looking for stunning co-workers to join us in helping millions more.

So Why Join Us?

Our work is exciting: we are transforming the creator analytics space, which gives many of us the chance to work on new and challenging projects. Equally, we set our people up for success with professional development opportunities, such as courses and conferences, that help them acquire desirable skills and experience.

Our company has met the future of work head on: we are fully remote, giving you the flexibility to balance work and life. When it’s time to take a break, our unlimited vacation policy lets you recharge. Lastly, we celebrate our wins and enjoy our work together on fun retreats to exciting destinations, such as Spain and Portugal, with more amazing places to come.

We are committed to diversity and inclusion. We work hard to enable creators of all kinds to succeed and, to that end, we prioritize diverse talent and an inclusive environment that encourages collaboration and creativity. We’re committed to building a company and a community where people thrive by being themselves and are inspired to do their best work every day.


What you will be doing

  • Build efficient, business-critical data pipelines, including ETL, data partitioning, data compaction, and AWS optimization
  • Collaborate closely with data scientists and product analysts to create the data sets that power vidIQ’s algorithms
  • Advocate for data quality, the acquisition of new data sources, and data infrastructure tooling
  • Work closely with cross-functional teammates, including product managers, designers, product analysts, and data scientists, to deliver the highest impact to our users




Job requirements

Who you are

  • You have experience using Python (including numpy and pandas) to build internal data pipelines that move data within an AWS account, and you have hands-on experience with DynamoDB, Lambda, Redshift, and S3
  • Hands-on experience with data workflow orchestration
  • Proven track record of working with cross-functional teams in an agile-like environment
  • Ability to communicate data concepts, requirements, and risks clearly to cross-functional team members, especially product analysts, data scientists, and product managers
  • Preferably, you have experience developing ML-based products and services and/or working with Spark