About the Role

We are looking for a Data Engineer to help us scale our existing data infrastructure and, in parallel, build the next-generation data platform for analytics at scale, machine learning infrastructure, and data validation systems. In this role, you will be responsible for communicating effectively with data consumers to fine-tune data platform systems (existing or new), taking ownership of and delivering high-performing systems and data pipelines, and helping the team scale them to handle ever-growing traffic. This is a growing team, which makes for many opportunities to work directly with product management, development, sales, and support teams. Everybody on the team is passionate about their work, and we're looking for similarly motivated "get stuff done" kind of people to join us!

Roles & Responsibilities
- Engineer data pipelines (batch and real-time) that aid in the creation of data-driven products for our platform
- Design, develop, and maintain a robust and scalable data warehouse and data lake
- Work closely with product managers and data scientists to bring the various datasets together and cater to our business intelligence and analytics use cases
- Design and develop solutions using data science techniques ranging from statistics and algorithms to machine learning
- Perform hands-on DevOps work to keep the data platform secure and reliable

Skills Required
- Bachelor's degree in Computer Science, Information Systems, or a related engineering discipline
- 6+ years' experience with ETL, data mining, data modeling, and working with large-scale datasets
- 6+ years' experience with an object-oriented programming language such as Python, Scala, Java, etc.
- Extremely proficient in writing performant SQL over large data volumes
- Experience with MapReduce, Spark, Kafka, Presto, and the surrounding ecosystem
- Experience in building automated analytical systems utilizing large datasets
- Experience with designing, scaling, and optimizing cloud-based data warehouses (like AWS Redshift) and data lakes
- Familiarity with AWS technologies preferred

Qualification – B.Tech/M.Tech/MCA (IT/Computer Science)
Years of Exp – 6-9
Vervali is seeking a Data Engineer for Thane, Mumbai

Salary: 20 to 30% hike
Notice Period: Immediate or 10 to 20 days
Qualification: BE/B.Tech in Computer Science/Information Technology
Relevant experience: 4-5 years of experience in programming and data transformation tools for ETL

Key Responsibilities:
- Data warehouse development and ETL design
- Program using Python and R technologies
- Build and maintain SQL procedures and ETL processes
- Design and implement a technical vision for client projects

Must have:
- Qualified individuals possess the attributes of being smart, curious, committed to the vision, passionate, fun/pleasant, an achiever, and having a sense of urgency
- 5+ years of data-focused software development and design experience
- 3+ years of experience designing and developing ETL solutions using Informatica PowerCenter 9.x, Matillion, or Talend
- Experience designing and developing database solutions using SQL Server and/or cloud database solutions (Hadoop, Redshift, Snowflake, BigQuery, MySQL, etc.)
- 3+ years of experience with Python and Bash shell scripting; experience with cloud technologies (AWS, GCP, Azure) is a plus

Thanks & Regards,
Darshit Mandavia
Do NOT apply if you:
- Want to be a Power BI, Qlik, or Tableau-only developer
- Are a machine learning aspirant
- Are a data scientist
- Want to write Python scripts
- Want to do AI
- Want to do 'BIG' data
- Want to do Hadoop
- Are a fresh graduate

Apply if you:
- Write SQL for complicated analytical queries
- Understand the client's existing business problem and can map their needs to the schema they have
- Can neatly disassemble the problem into components and solve the needs using SQL
- Have worked on existing BI products

You will develop solutions with our exciting new BI product for our clients. You should be very experienced and comfortable with writing SQL against very complicated schemas to help answer business questions, and you should have an analytical thought process.
About the job:
- You will work with data scientists to architect, code, and deploy ML models
- You will solve problems of storing and analyzing large-scale data in milliseconds
- You will architect and develop data processing and warehouse systems
- You will code, drink, breathe, and live Python, sklearn, and pandas. It's good to have experience in these, but it's not a necessity as long as you're super comfortable in a language of your choice.
- You will develop tools and products that give analysts ready access to the data

About you:
- Strong CS fundamentals
- You have strong experience working with production environments
- You write code that is clean, readable, and tested
- Instead of doing it a second time, you automate it
- You have worked with some of the commonly used databases and computing frameworks (PostgreSQL, S3, Hadoop, Hive, Presto, Spark, etc.)
- It would be great if you have a Kaggle or GitHub profile to share
- You are an expert in one or more programming languages (Python preferred). Experience with Python-based application development and data science libraries is also good to have.
- Ideally, you have 2+ years of experience in tech and/or data
- Degree in CS/Maths from a Tier-1 institute
JOB DESCRIPTION:
We are looking for a Data Engineer with a solid background in scalable systems to work with our engineering team to improve and optimize our platform. You will have significant input into the team's architectural approach and execution. We are looking for a hands-on programmer who enjoys designing and optimizing data pipelines for large-scale data. This is NOT a "data scientist" role, so please don't apply if you're looking for that.

RESPONSIBILITIES:
1. Build, maintain, and test performant, scalable data pipelines
2. Work with data scientists and application developers to implement scalable pipelines for data ingest, processing, machine learning, and visualization
3. Build interfaces for ingest across various data stores

MUST-HAVE:
1. A track record of building and deploying data pipelines as part of work or side projects
2. Ability to work with an RDBMS such as MySQL or Postgres
3. Ability to deploy over cloud infrastructure, at least AWS
4. Demonstrated ability and hunger to learn

GOOD-TO-HAVE:
1. Computer Science degree
2. Expertise in at least one of: Python, Java, Scala
3. Expertise and experience in deploying solutions based on Spark and Kafka
4. Knowledge of container systems like Docker or Kubernetes
5. Experience with NoSQL / graph databases
6. Knowledge of machine learning

Kindly apply only if you are skilled in building data pipelines.