Job Title: Data Engineer
Tech Job Family: DACI
• Bachelor's Degree in Engineering, Computer Science, CIS, or a related field (or equivalent work experience in a related field)
• 2 years of experience in Data, BI, or Platform Engineering, Data Warehousing/ETL, or Software Engineering
• 1 year of experience working on project(s) implementing solutions using development life cycles (SDLC)
• Master's Degree in Computer Science, CIS, or a related field
• 2 years of IT experience developing and implementing business systems within an organization
• 4 years of experience working with defect or incident tracking software
• 4 years of experience with technical documentation in a software development environment
• 2 years of experience working with an IT Infrastructure Library (ITIL) framework
• 2 years of experience leading teams, with or without direct reports
• Experience with application and integration middleware
• Experience with database technologies
• 2 years of experience in Hadoop or any cloud Big Data components (specific to the Data Engineering role)
• Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka, or equivalent cloud Big Data components (specific to the Data Engineering role)
• Expertise in MicroStrategy/Power BI/SQL, scripting, Teradata or an equivalent RDBMS, Hadoop (OLAP on Hadoop), dashboard development, and mobile development (specific to the BI Engineering role)
• 2 years of experience in Hadoop, NoSQL, RDBMS, or any cloud Big Data components, Teradata, MicroStrategy (specific to the Platform Engineering role)
• Expertise in Python, SQL, scripting, Teradata, Hadoop utilities such as Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, Kafka, or equivalent cloud Big Data components (specific to the Platform Engineering role)
Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics, or any other category protected under applicable law.
As a part of the Data Science & Analytics team at Rupifi, you will play a significant role in helping define the business/product vision and deliver it from the ground up, working with passionate, high-performing individuals in a very fast-paced environment.
You will work closely with Data Scientists & Analysts, Engineers, Designers, Product Managers, Ops Managers and Business Leaders, and help the team make informed data-driven decisions and deliver high business impact.
1. Use statistical and machine learning techniques to create scalable risk management systems
2. Design, develop and evaluate highly innovative models for risk management
3. Establish scalable, efficient and automated processes for model development, model
validation and model implementation
4. Analyse data to better understand potential risks, concerns and outcomes of decisions
5. Aggregate data from multiple sources to provide a comprehensive assessment
6. Create reports, presentations and process documents to display impactful results
7. Collaborate with other team members to effectively analyze and present data
8. Develop insights and data visualizations to solve complex problems and communicate ideas to stakeholders
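One step in the automated model-validation process described above can be sketched as a promotion gate on a holdout metric. The following Python sketch is illustrative only: the function, the toy labels/scores, and the 0.70 threshold are assumptions for this example, not part of the posting.

```python
# Hypothetical automated validation check: compute ROC AUC for a risk
# model's scores on a holdout set and gate model promotion on a threshold.

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation, with tie handling."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a win
    return wins / (len(pos) * len(neg))

# Toy holdout data: 1 = defaulted, 0 = repaid; scores are model risk scores.
labels = [1, 1, 0, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.75, 0.2]

auc = roc_auc(labels, scores)
MIN_AUC = 0.70  # promotion gate; the threshold is an assumption
print(f"AUC={auc:.3f}, promote={auc >= MIN_AUC}")
```

In a real pipeline this check would run automatically after each retraining, alongside stability and calibration checks, before a model is promoted.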
● Hands-on experience in Python/R & SQL
● Hands-on experience with machine learning and deep learning (e.g., gradient boosting machines, XGBoost, neural networks) as well as classic statistical modeling techniques and their assumptions
● Experience in handling complex and large data sources
● Experience in modeling techniques in the fintech/banking domain
● Experience in working on Big data and distributed computing
● Bachelor's/Master's degree in Mathematics, Data Science, Computer Science, Engineering, Statistics, Economics, or a similar quantitative field
● 4 to 10 years of modeling experience in the fintech/banking domain in fields like collections, underwriting, customer management, etc.
● Strong analytical skills with good problem-solving ability
● Strong presentation and communication skills
● Experience in working on advanced machine learning techniques
● Quantitative and analytical skills with a demonstrated ability to understand new analytical concepts
Roles and Responsibilities:
- Verify, review, and rectify questions end-to-end in the creation cycle, across all difficulty levels and multiple programming languages for coding questions.
- Review, validate, and correct the test cases that belong to a particular question.
- Document and report the quality parameters and suggest continuous improvements.
- Help the team write or generate code stubs wherever necessary for a coding question in one of the programming languages such as C, C++, Java, and Python. (A code stub is partial starter code, a snippet that helps candidates begin the problem.)
- Identify and rectify technical errors in coding questions and ensure that questions meet the required quality standards.
- Work with the Product Manager to research the latest technologies, trends, and assessments in coding.
- Bring an innovative approach to the ever-changing world of programming languages and framework-based technologies such as ReactJS, Angular, Spring Boot, and .NET.
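For illustration, a code stub of the kind mentioned above might look like the following hypothetical Python starter for a maximum-subarray question. The question, function name, and reference solution are invented for this sketch; an actual stub would leave the function body empty for the candidate.

```python
# Hypothetical code stub for a coding question: the scaffolding (function
# signature and driver) is provided; the candidate fills in the body.

def max_subarray_sum(nums):
    # Candidate writes their solution here.
    # A reference solution (Kadane's algorithm) is shown for illustration:
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)    # extend the run or start a new one
        best = max(best, cur)
    return best

if __name__ == "__main__":
    print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6
```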
- 0-3 years of experience writing code in C, C++, C#, Java, or Python
- Good to have: knowledge of manual QA and the QA lifecycle
- Ability to understand algorithms and Data Structures.
- Candidates with exposure to ReactJS, Java Spring Boot, or AI/ML will also be a good fit.
- Analytical and problem-solving skills for understanding complex problems.
- Experience on any competitive coding platform is an added advantage.
- Passion for technology.
- Degree related to Computer Science: MCA, B.E., B.Tech, B.Sc
- 3+ years experience in practical implementation and deployment of ML based systems preferred.
- BE/B Tech or M Tech (preferred) in CS/Engineering with strong mathematical/statistical background
- Strong mathematical and analytical skills, especially statistical and ML techniques, with familiarity with different supervised and unsupervised learning algorithms
- Implementation experiences and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimisation
- Experience in working on modeling graph structures related to spatiotemporal systems
- Programming skills in Python
- Experience in developing and deploying on cloud platforms (AWS, Google Cloud, or Azure)
- Good verbal and written communication skills
- Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow
The data science team is responsible for solving business problems with complex data. Data complexity can be characterized in terms of volume, dimensionality, and multiple touchpoints/sources. We understand the data, ask fundamental first-principles questions, and apply our analytical and machine learning skills to solve the problem in the best way possible.
Our ideal candidate
The role would be a client-facing one, hence good communication skills are a must.
The candidate should have the ability to communicate complex models and analysis in a clear and precise manner.
The candidate would be responsible for:
- Comprehending business problems properly - what to predict, how to build the DV, what value addition they are bringing to the client, etc.
- Understanding and analyzing large, complex, multi-dimensional datasets and building features relevant to the business
- Understanding the math behind algorithms and choosing one over another
- Understanding approaches like stacking, ensemble and applying them correctly to increase accuracy
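Stacking as described above can be sketched in a few lines: base models are fit on one data split, and a meta-model learns to blend their predictions on held-out data. Below is a minimal NumPy sketch with synthetic data; the base models, split sizes, and blend are all illustrative assumptions (real pipelines use out-of-fold predictions and a separate test set).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 0.5 + rng.normal(0, 0.1, size=200)

# Base models fit on the training split; the meta-model is fit on the
# base models' holdout predictions to avoid leakage.
X_tr, y_tr = X[:150], y[:150]
X_ho, y_ho = X[150:], y[150:]

# Base model 1: constant mean predictor.
p1 = np.full(len(y_ho), y_tr.mean())

# Base model 2: ordinary least-squares line fit.
A = np.c_[X_tr, np.ones(len(X_tr))]
w, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
p2 = np.c_[X_ho, np.ones(len(X_ho))] @ w

# Meta-model: linear blend of the base predictions (stacking).
P = np.c_[p1, p2, np.ones(len(p1))]
beta, *_ = np.linalg.lstsq(P, y_ho, rcond=None)
stacked = P @ beta

mse_base = ((p1 - y_ho) ** 2).mean()
mse_stacked = ((stacked - y_ho) ** 2).mean()
print(mse_stacked <= mse_base)  # the stacked blend is no worse than a base model
```

For brevity the meta-model is fit and evaluated on the same holdout; in practice you would generate out-of-fold base predictions and measure the blend on unseen data.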
Desired technical requirements
● Proficiency with Python and the ability to write production-ready code.
● Experience with PySpark, machine learning, and deep learning
● Big data experience (e.g., familiarity with Spark and Hadoop) is highly preferred
● Familiarity with SQL or other databases.
● Translate complex business requirements into scalable technical solutions that meet data design standards. Strong understanding of analytics needs and the proactiveness to build generic solutions that improve efficiency
● Build dashboards using self-service tools on Kibana and perform data analysis to support business needs
● Collaborate with multiple cross-functional teams
Big Data JD:
Data Engineer – SQL, RDBMS, pySpark/Scala, Python, Hive, Hadoop, Unix
Data engineering services required:
- Build data products and processes alongside the core engineering and technology team
- Collaborate with senior data scientists to curate, wrangle, and prepare data for use in their advanced analytical models
- Integrate data from a variety of sources, ensuring that it adheres to data quality and accessibility standards
- Modify and improve data engineering processes to handle ever larger, more complex, and more varied data sources and pipelines
- Use Hadoop architecture and HDFS commands to design and optimize data queries at scale
- Evaluate and experiment with novel data engineering tools and advise information technology leads and partners about new capabilities to determine optimal solutions for particular technical problems or designated use cases
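The integrate-and-assure-quality step described above can be sketched in plain Python; the record sources, field names, and quality rule below are invented for illustration (a production pipeline here would use the Spark/Hive stack this JD lists).

```python
# Hypothetical integration step: combine records from two sources,
# enforce a simple quality rule, and deduplicate on a key.

orders_db = [
    {"order_id": "A1", "amount": "120.50"},
    {"order_id": "A2", "amount": "75.00"},
]
orders_csv = [
    {"order_id": "A2", "amount": "75.00"},         # duplicate of a DB row
    {"order_id": "A3", "amount": "not-a-number"},  # fails the quality check
    {"order_id": "A4", "amount": "10.25"},
]

def is_valid(row):
    """Quality rule: amount must parse as a number."""
    try:
        float(row["amount"])
        return True
    except ValueError:
        return False

seen, clean = set(), []
for row in orders_db + orders_csv:
    if not is_valid(row):
        continue  # in a real pipeline, route to a quarantine table
    if row["order_id"] in seen:
        continue  # drop duplicates on the business key
    seen.add(row["order_id"])
    clean.append({"order_id": row["order_id"], "amount": float(row["amount"])})

print([r["order_id"] for r in clean])  # ['A1', 'A2', 'A4']
```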
Big data engineering skills:
- Demonstrated ability to perform the engineering necessary to acquire, ingest, cleanse, integrate, and structure massive volumes of data from multiple sources and systems into enterprise analytics platforms
- Proven ability to design and optimize queries to build scalable, modular, efficient data pipelines
- Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
- Proven experience delivering production-ready data engineering solutions, including requirements definition, architecture selection, prototype development, debugging, unit testing, deployment, support, and maintenance
- Ability to operate with a variety of data engineering tools and technologies; vendor-agnostic candidates preferred
Domain and industry knowledge:
- Strong collaboration and communication skills to work within and across technology teams and business units
- Demonstrates the curiosity, interpersonal abilities, and organizational skills necessary to serve as a consulting partner, includes the ability to uncover, understand, and assess the needs of various business stakeholders
- Experience with problem discovery, solution design, and insight delivery that involves frequent interaction, education, engagement, and evangelism with senior executives
- Ideal candidate will have extensive experience with the creation and delivery of advanced analytics solutions for healthcare payers or insurance companies, including anomaly detection, provider optimization, studies of sources of fraud, waste, and abuse, and analysis of clinical and economic outcomes of treatment and wellness programs involving medical or pharmacy claims data, electronic medical record data, or other health data
- Experience with healthcare providers, pharma, or life sciences is a plus
- Gathering project requirements from customers and supporting their requests.
- Creating project estimates and scoping the solution based on clients’ requirements.
- Delivery on key project milestones in line with the project plan/budget.
- Establishing individual project plans and working with the team in prioritizing production schedules.
- Communication of milestones with the team and to clients via scheduled work-in-progress meetings.
- Designing and documenting product requirements.
- Possess good analytical skills - detail-oriented
- Be familiar with Microsoft applications and working knowledge of MS Excel
- Knowledge of MIS Reports & Dashboards
- Maintaining strong customer relationships with a positive, can-do attitude
SpringML is looking to hire a top-notch Senior Data Engineer who is passionate about working with data and using the latest distributed frameworks to process large datasets. As an Associate Data Engineer, your primary role will be to design and build data pipelines. You will focus on helping client projects with data integration and data prep, and on implementing machine learning on datasets. In this role, you will work with some of the latest technologies, collaborate with partners on early wins, take a consultative approach with clients, interact daily with executive leadership, and help build a great company. Chosen team members will be part of the core team and play a critical role in scaling up our emerging practice.
- Ability to work as a member of a team assigned to design and implement data integration solutions.
- Build data pipelines using standard frameworks in Hadoop, Apache Beam, and other open-source solutions.
- Learn quickly – ability to understand and rapidly comprehend new areas – functional and technical – and apply detailed and critical thinking to customer solutions.
- Propose design solutions and recommend best practices for large scale data analysis
- B.Tech degree in computer science, mathematics, or other relevant fields.
- 4+ years of experience in ETL, data warehousing, visualization, and building data pipelines.
- Strong programming skills – experience and expertise in one of the following: Java, Python, Scala, C.
- Proficient in big data/distributed computing frameworks such as Apache Spark and Kafka
- Experience with Agile implementation methodologies
Company Profile and Job Description
AthenasOwl (AO) is our “AI for Media” solution that helps content creators and broadcasters create and curate smarter content. We launched the product in 2017 as an AI-powered suite for the media and entertainment industry. Clients use AthenasOwl's context-adapted technology for redesigning content, making better targeting decisions, automating hours of post-production work, and monetizing massive content libraries.
For more details visit: www.athenasowl.tv
Senior Machine Learning Engineer
4-6 years of experience
Mumbai (Malad W)
- Develop cutting edge machine learning solutions at scale to solve computer vision problems in the domain of media, entertainment and sports
- Collaborate with media houses and broadcasters across the globe to solve niche problems in the field of post-production, archiving and viewership
- Manage a team of highly motivated engineers to deliver high-impact solutions quickly and at scale
The ideal candidate should have:
- Strong programming skills in any one or more programming languages like Python and C/C++
- Sound fundamentals of data structures, algorithms and object-oriented programming
- Hands-on experience with any one popular deep learning framework like TensorFlow, PyTorch, etc.
- Experience in implementing Deep Learning Solutions (Computer Vision, NLP etc.)
- Ability to quickly learn and communicate the latest findings in AI research
- Creative thinking for leveraging machine learning to build end-to-end intelligent software systems
- A pleasantly forceful personality and charismatic communication style
- Someone who will raise the average effectiveness of the team and has demonstrated exceptional abilities in some area of their life. In short, we are looking for a “Difference Maker”