DevOps / SRE

Remote | United States, California, SF | Engineering

Job description

About Us

vidIQ’s mission is to advance the creator's journey with actionable data-driven insights. We pursue this through our values of being creator obsessed, lean and fast, and being scientific. We have already helped millions of creators, and we are looking for stunning co-workers to join us in helping millions more.

So Why Join Us?

Our work is exciting as we are transforming the creator analytics space, which has given many of us the opportunity to work on new and interesting projects. We also set our people up for success with professional development opportunities, such as courses and conferences, that help them build desirable skills and experience.

Our company has met the future of work head on: we are a fully remote company, giving you the flexibility to balance work and life. When it’s time to take a break, we have an unlimited vacation policy so you can recharge. Lastly, we celebrate our wins and try to enjoy work by going on fun retreats to exciting destinations, such as Spain and Portugal, with more amazing destinations to come.

We are committed to diversity and inclusion. We work hard to enable creators of all kinds to succeed and, to that end, we prioritize diverse talent and an inclusive environment that encourages collaboration and creativity. We’re committed to building a company and a community where people thrive by being themselves and are inspired to do their best work every day.

What you will be doing

  • Manage infrastructure routines (access control, maintenance planning, resource and cost optimization)
  • Collaborate closely with the platform team
  • Be an advocate for infrastructure quality, security, tooling
  • Work closely with cross-functional teammates (backend engineers, data engineers, ML engineers)

Required technical skills

  • At least 5 years of hands-on experience with AWS:

    • EC2, IAM, DynamoDB, RDS, ElastiCache, EKS, Kinesis Firehose, S3, VPC, Lambda, Route53, cost optimization.

  • Good understanding of networks.

  • Strong knowledge of setting up proper security in a growing organization (access management, SSH key rotation, etc.)

  • Experience in setting up monitoring (Grafana/Prometheus) and alerting for different kinds of systems.

  • Experience with CI/CD tools.

  • Knowledge of Python, bash/shell, Chef, Docker, Terraform, Kubernetes.

  • Experience with Postgres, MongoDB, Redis, Elasticsearch (management, tuning, backups)

Required non-technical skills

  • Good writing skills (documentation, instructions, tutorials, reports)

  • Strong analytical skills — ability to decompose complex problems into smaller pieces

  • Ability to deliver observable results iteratively

  • Ability to understand strategic goals and translate them into short-term goals

  • Experience working in distributed teams

Job requirements

Who you are

  • You have experience using Python (including NumPy and pandas) for internal data pipelines (moving data within an AWS account), as well as experience with DynamoDB, Lambda, Redshift, and S3
  • Hands-on experience with data workflow orchestration
  • Proven track record of working with cross-functional teams in an agile-like environment
  • Ability to communicate data concepts, requirements, and risks clearly to cross-functional team members, especially product analysts, data scientists, and product managers
  • Preferably, you have experience developing ML-based products & services and/or working with Spark