We are #hiring an AWS Data Engineer to join our team
Job Title: AWS Data Engineer
Experience: 5 to 10 years
Location: Remote
Notice: Immediate or Max 20 Days
Role: Permanent
Skillset: AWS, ETL, SQL, Python, PySpark, PostgreSQL, Dremio
Job Description:
Able to develop ETL jobs.
Able to help with data curation/cleanup, data transformation, and building ETL pipelines.
Strong PostgreSQL experience is required; knowledge of Dremio as a data virtualization/semantic layer between the database and the application is a plus.
SQL, Python, and PySpark are a must.
Good communication skills.
Job Title: Data Scientist
Job Duties
- Data Scientist responsibilities include planning projects and building analytics models.
- You should have a strong problem-solving ability and a knack for statistical analysis.
- If you're also able to align our data products with our business goals, we'd like to meet you. Your ultimate goal will be to help improve our products and business decisions by making the most out of our data.
Responsibilities
Own end-to-end business problems and metrics, build and implement ML solutions using cutting-edge technology.
Create scalable solutions to business problems using statistical techniques, machine learning, and NLP.
Design, experiment with, and evaluate highly innovative models for predictive learning.
Work closely with software engineering teams to drive real-time model experiments, implementations, and new feature creation.
Establish scalable, efficient, and automated processes for large-scale data analysis, model development, deployment, experimentation, and evaluation.
Research and implement novel machine learning and statistical approaches.
Requirements
2-5 years of experience in data science.
In-depth understanding of modern machine learning techniques and their mathematical underpinnings.
Demonstrated ability to build PoCs for complex, ambiguous problems and scale them up.
Strong programming skills (Python, Java)
High proficiency in at least one of the following broad areas: machine learning, statistical modelling/inference, information retrieval, data mining, NLP
Experience with SQL and NoSQL databases
Strong organizational and leadership skills
Excellent communication skills
Hiring for Azure Data Engineers.
Location: Bangalore
Employment type: Full-time, permanent
Website: www.amazech.com
Qualifications:
B.E./B.Tech/M.E./M.Tech in Computer Science, Information Technology, Electrical or Electronics Engineering, with a good academic background.
Experience and Required Skill Sets:
• Minimum 5 years of hands-on experience with Azure Data Lake, Azure Data Factory, SQL Data Warehouse, Azure Blob, Azure Storage Explorer
• Experience in Data warehouse/analytical systems using Azure Synapse.
• Proficient in creating Azure Data Factory pipelines for ETL processing: copy activity, custom Azure development, Synapse, etc.
• Knowledge of Azure Data Catalog, Event Grid, Service Bus, SQL, and Purview.
• Good technical knowledge in Microsoft SQL Server BI Suite (ETL, Reporting, Analytics, Dashboards) using SSIS, SSAS, SSRS, Power BI
• Experience designing and developing batch and real-time streaming data loads to data warehouse systems
Other Requirements:
A Bachelor's or Master's degree (Engineering or computer-related degree preferred)
Strong understanding of Software Development Life Cycles including Agile/Scrum
Responsibilities:
• Ability to create complex, enterprise-transforming applications that meet and exceed client expectations.
• Responsible for the bottom line; strong project management abilities and the ability to keep the team on schedule.
Skills and requirements
- Experience analyzing complex and varied data in a commercial or academic setting.
- Desire to solve new and complex problems every day.
- Excellent ability to communicate scientific results to both technical and non-technical team members.
Desirable
- A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering, or Biological Sciences.
- Hands-on experience with Python, PySpark, and SQL.
- Hands-on experience building end-to-end data pipelines.
- Hands-on experience with Azure Data Factory, Azure Databricks, and Data Lake is an added advantage.
- Experience with big data tools: Hadoop, Hive, Sqoop, Spark, Spark SQL.
- Experience with SQL or NoSQL databases for the purposes of data retrieval and management.
- Experience in data warehousing and business intelligence tools, techniques, and technology, as well as experience diving deep into data analysis or technical issues to arrive at effective solutions.
- BS degree in math, statistics, computer science or equivalent technical field.
- Experience mining structured and unstructured data (SQL, ETL, data warehousing, machine learning, etc.) in a business environment with large-scale, complex data sets.
- Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way.
- Willingness to learn and work on data science, ML, and AI.
Must Have Skills:
- Managing and designing the reporting environment, including data sources, security, and metadata.
- Preparing reports for executive leadership that effectively communicate trends, patterns, and predictions using relevant data
- Establish KPIs to measure the effectiveness of business decisions.
- Work with management to prioritize business and information needs.
- Provide data solutions, tools, and capabilities to enable self-service frameworks for data consumers
- Provide expertise and translate the business needs to design; and develop tools, techniques, and metrics, and dashboards for insights and data visualization.
- Responsible for developing and executing tools to monitor and report on data quality.
- Responsible for establishing appreciation of, and adherence to, the principles of data quality management, including metadata, lineage, and business definitions.
- Provide support to Tech teams in managing security mechanisms and data access governance
- Provide technical support, mentoring, and training to less senior analysts.
- Derive insights through A/B tests, funnel analysis, and user segmentation
Key Criteria:
- 3+ years in a data analyst position, preferably working as a Data Analyst in a fast-paced and dynamic business setting.
- Strong skills in SQL-based query languages (MySQL, PostgreSQL) and Excel, with the ability to learn other analytic tools.
- Scripting experience (Python, Perl, JavaScript, shell).
- Skilled in statistical and econometric modeling, performing quantitative analysis, and technological data mining and analysis techniques.
- This role requires a mix of data schema knowledge and technical writing, paired with hands-on, collaborative work with Systems Analysts. Technical exposure through the requirements, QA, or development phases of the software lifecycle is also a plus.
- Demonstrated analytical skills. Ability to work with large amounts of data: facts, figures, and number crunching. Ability to see through the data and analyze it to find conclusions.
- Excellent attention to detail. Data needs to be precise. Conclusions drawn from data analysis will drive critical client decisions
- Domain knowledge in the Internet of Things is a plus.
- Managing a junior team of analysts; exceptional written and verbal communication skills are crucial for performing the job duties and managing others.
- B.E./B.Tech/M.E./M.Tech from any recognized university in India.
- Minimum 60% in Graduation or Post-Graduation
- SQL knowledge and hands-on experience is a must.
- Great interpersonal and communication skills
Data Platform Engineering at Uber is looking for a strong Technical Lead (Level 5a Engineer) who has built high-quality platforms and services that operate at scale. A 5a Engineer at Uber exhibits the following qualities:
- Demonstrate tech expertise: demonstrate the technical skills to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.
- Execute large-scale projects: define, plan, and execute complex and impactful projects. You communicate the vision to peers and stakeholders.
- Collaborate across teams: act as a domain resource for engineers outside your team and help them leverage the right solutions. Facilitate technical discussions and drive them to a consensus.
- Coach engineers: coach and mentor less experienced engineers and deeply invest in their learning and success. You give and solicit feedback, both positive and negative, to help improve the entire team.
- Tech leadership: lead the effort to define best practices in your immediate team, and help the broader organization establish better technical or business processes.
What You’ll Do
- Build a scalable, reliable, operable and performant data analytics platform for Uber’s engineers, data scientists, products and operations teams.
- Work alongside the pioneers of big data systems such as Hive, YARN, Spark, Presto, Kafka, and Flink to build a highly reliable, performant, easy-to-use software system for Uber’s planet-scale data.
- Become proficient in the multi-tenancy, resource isolation, abuse prevention, and self-serve debuggability aspects of a highly performant, large-scale service while building these capabilities for Uber's engineers and operations folks.
What You’ll Need
- 7+ years of experience building large-scale products, data platforms, and distributed systems in a high-caliber environment.
- Architecture: Identify and solve major architectural problems by going deep in your field or broad across different teams. Extend, improve, or, when needed, build solutions to address architectural gaps or technical debt.
- Software Engineering/Programming: Create frameworks and abstractions that are reliable and reusable. You have advanced knowledge of at least one programming language and are happy to learn more. Our core languages are Java, Python, Go, and Scala.
- Data Engineering: Expertise in one of the big data analytics technologies we currently use, such as Apache Hadoop (HDFS and YARN), Apache Hive, Impala, Drill, Spark, Tez, Presto, Calcite, Parquet, or Arrow. Under-the-hood experience with similar systems such as Vertica, Google Borg, Google BigQuery, Amazon EMR, Amazon Redshift, Docker, Kubernetes, or Mesos.
- Execution & Results: You tackle large technical projects/problems that are not clearly defined. You anticipate roadblocks and have strategies to de-risk timelines. You orchestrate work that spans multiple teams and keep your stakeholders informed.
- A team player: You believe that you can achieve more on a team, that the whole is greater than the sum of its parts. You rely on others’ candid feedback for continuous improvement.
- Business acumen: You understand requirements beyond the written word. Whether you’re working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.
The Job
The Architect, Machine Learning and Artificial Intelligence (including Computer Vision) will grow and lead a team of talented Machine Learning (ML), Computer Vision (CV), and Artificial Intelligence (AI) researchers and engineers to develop innovative machine learning algorithms, scalable ML systems, and AI applications for Racetrack. This role will focus on developing and deploying personalization and recommender systems, search, experimentation, audience, and content AI solutions to drive user experience and growth.
The Daily
- Develop innovative data science solutions that utilize machine learning and deep learning algorithms, statistical and quantitative modelling approaches to support product, engineering, content, and marketing initiatives.
- Build and lead a world-class team of ML and AI scientists and engineers.
- Be a hands-on leader: mentor the team in the latest machine learning and deep learning approaches, and introduce new technologies and processes. Single-handedly manage MVPs and PoCs.
- Work with ML engineers to design solution architecture and develop scalable machine learning system to accelerate learning cycle.
- Identify data science opportunities that deliver business value.
- Develop ML/AI/CV roadmap and educate both internal and external stakeholders at all levels to drive implementation and measurement.
- Hands-on experience in image processing for the auto industry.
- BFSI domain knowledge is a plus.
- Provide thought leadership to enable ML/AI applications.
- Manage products priorities and ensure timely delivery.
- Develop and evangelize best practices for scoping, building, validating, deploying, and monitoring ML/AI products.
- Prepare and present ML modelling results and analytical insights that help drive the business to senior leadership.
The Essentials
- 8+ years of work experience in Machine Learning, AI, and Data Science with a proven track record of driving innovation and business impact
- 4+ years of managing a team of data scientists and ML and AI researchers and engineers
- Strong machine learning, deep learning, and statistical modelling expertise, such as causal inference modelling, ensembles, neural networks, reinforcement learning, NLP, and computer vision
- Advanced knowledge of SQL and experience with big data platforms (AWS, Snowflake, Spark, Google Cloud, etc.)
- Proficiency in machine learning and deep learning languages and platforms (Python, R, TensorFlow, Keras, PyTorch, MXNet etc.)
- Experience in deploying machine learning algorithms and advanced modelling solutions
- Experience in developing advanced analytics and ML infrastructure and system
- Self-starter and self-motivated with the proven ability to deliver results in a fast-paced, high-energy environment
- Strong communication skills and the ability to explain complex analyses and algorithms to non-technical audiences
- Works effectively with cross-functional teams to build trusted partnerships
- Working experience in digital media and entertainment industry preferred
- Experience with Agile methodologies preferred
Responsibilities
- Research and test novel machine learning approaches for analysing large-scale distributed computing applications.
- Develop production-ready implementations of proposed solutions across different AI and ML models and algorithms, including testing on live customer data to improve accuracy, efficacy, and robustness
- Work closely with other functional teams to integrate implemented systems into the SaaS platform
- Suggest innovative and creative concepts and ideas that would improve the overall platform
Qualifications
The ideal candidate must have the following qualifications:
- 5+ years of experience in the practical implementation and deployment of large customer-facing ML-based systems.
- MS or M.Tech (preferred) in applied mathematics/statistics; CS or Engineering disciplines are acceptable but must be paired with strong quantitative and applied mathematical skills.
- In-depth working familiarity, beyond coursework, with classical and current ML techniques, covering both supervised and unsupervised learning techniques and algorithms.
- Implementation experiences and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimization
- Experience in working on modeling graph structures related to spatiotemporal systems
- Programming skills in Python are a must
- Experience in developing and deploying on cloud (AWS or Google or Azure)
- Good verbal and written communication skills
- Familiarity with well-known ML and data libraries and frameworks such as Pandas, Keras, and TensorFlow
Most importantly, you should be someone who is passionate about building new and innovative products that solve tough real-world problems.
Location
Chennai, India