We are growing quickly at TechVerito and are looking for a Data Engineer with progressive experience in AWS, Spark, and deployment tools. This is a great opportunity to work with a company that has a strong, innovative team dedicated to improving the spend-management processes of today's dynamic businesses. We take care of our employees every way we can, with competitive compensation packages, a flexible working environment, mentorship programs, and much more!

We educate and encourage people to follow Test-Driven Development, Behaviour-Driven Development, the Agile Manifesto, eXtreme Programming, and reactive and responsive programming. We love to write clean, quality, readable, and maintainable code. We are personally accountable for delivering on our commitments. We value our people, encourage their development, and reward their performance. We work together, across boundaries, to meet the needs of our customers and to help our company win.

If you are looking for a change, this is what we can promise you:
- You will have challenging problems to solve every single day.
- You will have the flexibility to solve problems and deliver solutions.
- We will provide a highly collaborative and enjoyable working environment with skilled and super friendly teammates.
- We will give you the freedom to learn, innovate, and make mistakes (as long as we learn from them :))
- We will fully support you in developing software the right way, following clean coding and software development principles.
- We won't burden you with useless policies and procedures.
- We will provide you with the tools you need to do your job right.

If you're already a great coder with passion and curiosity, then this role is right for you.

Here are some more details about the position:

Qualifications
Required:
- 3+ years of work experience showing growth as a Data Engineer.
- Hands-on experience with AWS and Spark.
- Experience developing in AWS or similar cloud platforms.
Preferred:
- AWS Lambda, ECS, S3, EMR, DynamoDB, Aurora, Redshift, or similar.
- Ability to work across multiple areas such as data pipeline ETL, data modelling and design, and writing complex SQL queries.
- Familiarity with high-volume transactional systems, microservice design, or data processing pipelines (Spark).
- Knowledge of and hands-on experience with serverless technologies such as Lambda is a plus.
- Experience handling high-traffic web services and solving scaling issues.
- Expertise in practices like Agile, peer reviews, continuous integration, and continuous delivery.

Role & Responsibilities:
- Translate business requirements and source-system understanding into technical solutions using Big Data, Java, Python, etc.
- Plan and execute on both short-term and long-term goals, individually and with the team.
- Champion the overall strategy for data governance, security, privacy, quality, and retention that satisfies business policies and requirements.
- Identify, document, and promote best practices.
- Refactor existing solutions to make them reusable and scalable.
- Collaborate on projects spanning multiple teams.
- Thrive in a self-motivated, internal-innovation-driven environment.
Job Description
In this role you will help us build, improve, and maintain our large data infrastructure, where we collect terabytes of logs daily. Data-driven decision-making is crucial to the success of our customers, and this role is central to ensuring we have a cutting-edge data infrastructure to do things faster, better, and cheaper!

Experience
1-3 years

Required Skills
- Must be a polyglot with good command of Java, Scala, and a scripting language
- Non-trivial project experience with distributed computing frameworks like Apache Spark/Hadoop/Pig/Kafka/Storm, with sound knowledge of their internals
- Expert knowledge of relational databases like MySQL, and in-memory data stores like Redis
- Regular participation in coding/hacking contests like TopCoder, Code Jam, and Hacker Cup is a huge plus

Prerequisites
- Strong analytical skills and a solid foundation in Computer Science fundamentals, especially data structures/algorithms, object-oriented principles, operating systems, and computer networks
- Ability and willingness to take ownership and work with minimal supervision, independently or as part of a team
- Passion for innovation and a "never say die" attitude
- Strong verbal and written communication skills

Education
B.Tech/M.Tech/MS/Dual degree in Computer Science with above-average academic credentials
Job Description:
The Data Engineering team is one of the core technology teams at Lumiq.ai and is responsible for creating all the data-related products and platforms, which scale to any amount of data, users, and processing. The team also interacts with our customers to work out solutions, create technical architectures, and deliver the products and solutions. If you are someone who is always pondering how to make things better, how technologies can interact, or how various tools, technologies, and concepts can help a customer or how a customer can use our products, then Lumiq is the place of opportunities.

Who are you?
- Enthusiast is your middle name. You know what's new in Big Data technologies and how things are moving.
- Apache is your toolbox: you have contributed to open-source projects or discussed problems with the community on several occasions.
- You use the cloud for more than just provisioning a virtual machine.
- Vim is friendly to you, and you know how to exit Nano.
- You check logs before screaming about an error.
- You are a solid engineer who writes modular code and commits in Git.
- You are a doer who doesn't say "no" without first understanding.
- You understand the value of documenting your work.
- You are familiar with the machine learning ecosystem and know how to help your fellow data scientists explore data and create production-ready ML pipelines.

Eligibility
- At least 2 years of data engineering experience
- Have interacted with customers

Must-Have Skills:
- Amazon Web Services (AWS): EMR, Glue, S3, RDS, EC2, Lambda, SQS, SES
- Apache Spark
- Python
- Scala
- PostgreSQL
- Git
- Linux

Good-to-Have Skills:
- Apache NiFi
- Apache Kafka
- Apache Hive
- Docker
- Amazon certification