Job Description: Data Engineer
We are looking for a curious Data Engineer to join our fast-growing Tech Team at StanPlus.
About RED.Health (Formerly Stanplus Technologies)
Get to know the team:
Join our team and help us build the world’s fastest and most reliable emergency response system using cutting-edge technology.
Because every second counts in an emergency, we are building systems and flows with four nines (99.99%) of reliability to ensure that our technology is always there when people need it the most. We are looking for distributed systems experts who can help us perfect the architecture behind our key design principles: scalability, reliability, programmability, and resiliency. Our system features a powerful dispatch engine that connects emergency service providers with patients in real time.
Key Responsibilities
● Build Data ETL Pipelines
● Develop data set processes
● Apply strong analytical skills to unstructured datasets
● Evaluate business needs and objectives
● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery
● Interpret trends and patterns
● Work with data and analytics experts to strive for greater functionality in our data system
● Build algorithms and prototypes
● Explore ways to enhance data quality and reliability
● Work with the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
● Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Key Requirements
● Proven experience of at least 3 years as a data engineer, software developer, or in a similar role.
● Bachelor's / Master’s degree in data engineering, big data analytics, computer engineering, or related field.
● Experience with big data tools: Hadoop, Spark, Kafka, etc.
● Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
● Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
● Experience with Azure and AWS cloud services: EC2, EMR, RDS, Redshift
● Experience with BigQuery
● Experience with stream-processing systems: Storm, Spark-Streaming, etc.
● Experience with languages: Python, Java, C++, Scala, SQL, R, etc.
● Good hands-on experience with Hive and Presto.
About Red.Health
RED.HEALTH is a leading emergency response company that operates across India. With a presence in over 550 cities, we provide a range of healthcare solutions to make emergency care accessible for everyone. With a fleet of 5000+ ambulances and more than 200 hospital partners, we provide assistance to over 150 enterprises for emergency care & health services in the country.
We have national tie-ups with Apollo, Care, Narayana Health, Manipal, KIIMS, AIG, and Sterling hospitals, and some of our key clients are PepsiCo, Amazon, American Express, Microsoft, Goldman Sachs, Deutsche Bank, Flipkart, Capgemini, and DRL.
Our journey started in 2016 as StanPlus, a comprehensive emergency response company. Transformed into RED.HEALTH, today we are a one-stop solution catering to the healthcare needs of our country. We have exponentially enhanced our capabilities by adding new verticals. These verticals include RED Ambulances, RED Academy, RED Air Guardian, Asth.RED, Hospital & Customized Enterprise Solutions, RED Priority Clinics, RED Assist, and RED Edge. Each vertical is designed to provide specialized services and solutions covering A-to-Z healthcare needs and emergency situations.
The company's commitment to quality care and accessibility is reflected in its core values: Empathy, Speed, Reliability, and Frugality. Our vision is to be the go-to resource for emergency care in India, and we are already making a significant impact in the industry. We are committed to using innovative approaches and cutting-edge technology to provide the best possible care to our patients. Dedication, hard work, and a passion for saving lives are what make us the company we are today.
- Mandatory: Hands-on experience in Python and PySpark.
- Build PySpark applications using Spark DataFrames in Python, with Jupyter Notebook and PyCharm as IDEs.
- Experience optimizing Spark jobs that process huge volumes of data.
- Hands-on experience with version control tools like Git.
- Experience with Amazon analytics services (e.g., EMR), compute services (e.g., Lambda, EC2), storage services (e.g., S3), and other services such as SNS.
- Experience/knowledge of Bash/shell scripting is a plus.
- Experience working with fixed-width, delimited, and multi-record file formats.
- Hands-on experience with tools like Jenkins to build, test, and deploy applications.
- Awareness of DevOps concepts and ability to work in an automated release pipeline environment.
- Excellent debugging skills.
About the role:
Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science, and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture.
Here’s what will be expected out of you:
➢ Ability to work with a fast-paced startup mindset; should be able to manage all aspects of data extraction, transfer, and load activities.
➢ Develop data pipelines that make data available across platforms.
➢ Should be comfortable executing ETL (Extract, Transform, and Load) processes, including data ingestion and data cleaning and curation into a data warehouse, database, or data platform.
➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.
➢ Work closely with DevOps and senior architects to design scalable system and model architectures for enabling real-time and batch services.
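The ETL flow described above (ingest, clean, curate into a warehouse) can be sketched in miniature with the stdlib; the table name, fields, and in-memory SQLite "warehouse" are illustrative stand-ins:

```python
import sqlite3

def extract():
    # Ingest raw records (a stand-in for an API, file, or upstream database).
    return [{"id": "1", "amount": " 10.5 "}, {"id": "2", "amount": "bad"}]

def transform(rows):
    # Clean and curate: coerce types, drop records that fail to parse.
    out = []
    for r in rows:
        try:
            out.append((int(r["id"]), float(r["amount"])))
        except ValueError:
            continue
    return out

def load(rows, conn):
    # Load curated rows into the warehouse table.
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

Production pipelines follow the same shape, with the extract/transform/load steps scheduled and monitored by an orchestrator such as Airflow.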
What we want:
➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.
➢ Well versed with the concept of Data warehousing, Data Modelling and/or Data Analysis.
➢ Experience using and building pipelines and performing ETL with industry-standard best practices on Redshift (2+ years).
➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.
➢ Good understanding of orchestration tools like Airflow.
➢ Strong Python and SQL coding skills.
➢ Strong experience in distributed systems like Spark.
➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).
➢ Solid hands-on experience with data extraction techniques such as CDC or time/batch-based extraction, and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction.
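The time/batch-based extraction technique mentioned above can be sketched as a watermark loop; the source rows and `updated_at` column are hypothetical (real deployments would use tools like Debezium, AWS DMS, or Kafka Connect for CDC):

```python
# Hypothetical time-based incremental extraction: pull only rows whose
# updated_at is newer than the last watermark, then advance the watermark.
source = [
    {"id": 1, "updated_at": 100},
    {"id": 2, "updated_at": 200},
    {"id": 3, "updated_at": 300},
]

def extract_batch(rows, watermark):
    """Return rows modified after the watermark, plus the new watermark."""
    batch = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in batch), default=watermark)
    return batch, new_watermark

batch, wm = extract_batch(source, watermark=100)       # picks up rows 2 and 3
next_batch, wm = extract_batch(source, watermark=wm)   # nothing new yet
```

CDC-based extraction differs in that it reads the database's change log rather than polling a timestamp column, which also captures deletes and intermediate updates that a watermark scan would miss.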
Note: Experience at product-based or e-commerce companies is an added advantage.
You will:
- Create highly scalable AWS micro-services utilizing cutting edge cloud technologies.
- Design and develop Big Data pipelines handling huge geospatial data.
- Bring clarity to large complex technical challenges.
- Collaborate with Engineering leadership to help drive technical strategy.
- Project scoping, planning and estimation.
- Mentor and coach team members at different levels of experience.
- Participate in peer code reviews and technical meetings.
- Cultivate a culture of engineering excellence.
- Seek, implement and adhere to standards, frameworks and best practices in the industry.
- Participate in on-call rotation.
You have:
- Bachelor’s/Master’s degree in computer science, computer engineering or relevant field.
- 5+ years of experience in software design, architecture and development.
- 5+ years of experience using object-oriented languages (Java, Python).
- Strong experience with Big Data technologies like Hadoop, Spark, MapReduce, Kafka, etc.
- Strong experience in working with different AWS technologies.
- Excellent competencies in data structures & algorithms.
Nice to have:
- Proven track record of delivering large scale projects, and an ability to break down large tasks into smaller deliverable chunks
- Experience in developing high throughput low latency backend services
- Affinity to spatial data structures and algorithms.
- Familiarity with Postgres DB, Google Places or Mapbox APIs
What we offer
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Unlimited Paid Time Off
- In Office Daily Catered Lunch
- Fully stocked snacks/beverages
- 401(k) employer match
- Health coverage including medical, dental, vision and option for HSA or FSA
- Generous parental leave
- Company-wide DEIB Committee
- Inclusion Academy Seminars
- Wellness/Gym Reimbursement
- Pet Expense Reimbursement
- Company-wide Volunteer Day
- Education reimbursement program
- Cell phone reimbursement
- Equity Analysis to ensure fair pay
About Kloud9:
Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.
Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. Traditional e-commerce infrastructure is limiting and poses a huge financial challenge in terms of the money spent on physical data infrastructure.
At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.
Our sole focus is to provide cloud expertise to the retail industry, giving our clients the empowerment to take their business to the next level. Our team of proficient architects, engineers, and developers has been designing, building, and implementing solutions for retailers for an average of more than 20 years.
We are a cloud vendor that is both platform and technology independent. Our vendor independence not only gives us a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.
What we are looking for:
● 3+ years’ experience developing Data & Analytic solutions
● Experience building data lake solutions leveraging one or more of the following: AWS EMR, S3, Hive, and Spark
● Experience with relational SQL
● Experience with scripting languages such as Shell, Python
● Experience with source control tools such as GitHub and related dev process
● Experience with workflow scheduling tools such as Airflow
● In-depth knowledge of scalable cloud architecture
● Has a passion for data solutions
● Strong understanding of data structures and algorithms
● Strong understanding of solution and technical design
● Has a strong problem-solving and analytical mindset
● Experience working with Agile Teams.
● Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
● Able to quickly pick up new programming languages, technologies, and frameworks
● Bachelor’s Degree in computer science
Why Explore a Career at Kloud9:
With job opportunities in prime locations in the US, London, Poland, and Bengaluru, we help build your career path in the cutting-edge technologies of AI, Machine Learning, and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.
We are looking for someone with good exposure to concepts and/or technology across the broader spectrum. Enterprise Risk Technology covers a variety of existing systems and green-field projects. The role calls for full-stack Hadoop development experience with Scala, plus full-stack Java development experience covering Core Java (including JDK 1.8) and a good understanding of design patterns.
Requirements:-
• Strong hands-on development in Java technologies.
• Strong hands-on development in Hadoop technologies like Spark, Scala and experience on Avro.
• Participation in product feature design and documentation
• Requirement break-up, ownership, and implementation.
• Product BAU deliveries and Level 3 production defect fixes.
Qualifications & Experience
• Degree in a numerate subject
• Hands-on experience with Hadoop, Spark, Scala, Impala, Avro, and messaging systems like Kafka
• Experience across a core compiled language – Java
• Proficiency in Java-related frameworks like Spring, Hibernate, and JPA
• Hands-on experience with JDK 1.8 and a strong skill set covering Collections and Multithreading, with experience working on distributed applications.
• Strong hands-on development track record with end-to-end development cycle involvement
• Good exposure to computational concepts
• Good communication and interpersonal skills
• Working knowledge of risk and derivatives pricing (optional)
• Proficiency in SQL (PL/SQL) and data modelling.
• Understanding of Hadoop architecture and the Scala programming language is good to have.
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems and technical requirements, and deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, database
5+ years of software industry experience, with 2+ years in analytics, data mining, and/or data warehousing
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, detail oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus
Experience in Java/Scala a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud
We are looking for an engineer with an ML/DL background.
The ideal candidate should have the following skillset:
1) Python
2) Tensorflow
3) Experience building and deploying systems
4) Experience with any of Theano, Torch, Caffe, or Keras is useful
5) Experience with data warehousing/storage/management would be a plus
6) Experience writing production software would be a plus
7) The ideal candidate will have developed their own DL architectures, apart from using open-source architectures
8) The ideal candidate will have extensive experience with computer vision applications
Candidates will be responsible for building Deep Learning models to solve specific problems. The workflow looks as follows:
1) Define Problem Statement (input -> output)
2) Preprocess Data
3) Build DL model
4) Test on different datasets using Transfer Learning
5) Parameter Tuning
6) Deployment to production
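Step 5 of the workflow above (parameter tuning) can be sketched framework-free; the toy one-parameter model, data, and learning-rate grid below are illustrative stand-ins for tuning a real TensorFlow/Keras model:

```python
# Parameter tuning in miniature: grid-search a learning rate for a
# one-parameter model y = w * x trained by gradient descent on MSE.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # the true weight is 2.0

def train(lr, steps=50):
    """Fit w by gradient descent with the given learning rate."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def loss(w):
    """Mean squared error of the fitted model on the training data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Pick the learning rate whose trained model has the lowest loss.
best_lr = min([0.001, 0.01, 0.1], key=lambda lr: loss(train(lr)))
```

With a real DL model the grid would cover learning rate, batch size, and architecture choices, and the selection would use a held-out validation set rather than the training loss.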
Candidates should have experience working on Deep Learning, with an engineering degree from a top-tier institute (preferably IIT/BITS or equivalent).
Responsibilities for Data Scientist/ NLP Engineer
• Work with customers to identify opportunities for leveraging their data to drive business solutions.
• Develop custom data models and algorithms to apply to data sets.
• Basic data cleaning and annotation for any incoming raw data.
• Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
• Develop company A/B testing framework and test model quality.
• Deployment of ML model in production.
Qualifications for Junior Data Scientist/ NLP Engineer
• BS, MS in Computer Science, Engineering, or related discipline.
• 3+ Years of experience in Data Science/Machine Learning.
• Experience with programming language Python.
• Familiar with at least one database query language, such as SQL
• Knowledge of Text Classification & Clustering, Question Answering & Query Understanding, Search Indexing & Fuzzy Matching.
• Excellent written and verbal communication skills for coordinating across teams.
• Willing to learn and master new technologies and techniques.
• Knowledge and experience in statistical and data mining techniques: GLM/regression, Random Forest, boosting, trees, text mining, NLP, etc.
• Experience with chatbots would be a bonus but is not required
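The fuzzy matching mentioned in the qualifications above can be sketched with the stdlib difflib module; the catalog entries and query are made up for illustration:

```python
import difflib

catalog = ["ambulance dispatch", "patient transport", "hospital transfer"]

# Find the catalog entry closest to a misspelled query (ratio-based matching).
matches = difflib.get_close_matches("ambulence dispach", catalog, n=1, cutoff=0.6)

def similarity(a, b):
    """Similarity ratio in [0, 1] from difflib's SequenceMatcher."""
    return difflib.SequenceMatcher(None, a, b).ratio()
```

Production search systems typically pair an inverted index for candidate retrieval with an edit-distance or embedding-based scorer like this for final ranking.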
Octro Inc. is looking for a Data Scientist who will support the product, leadership and marketing teams with insights gained from analyzing multiple sources of data. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action.
They must have strong experience using a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms and creating/running simulations. They must have a proven ability to drive business results with their data-based insights.
They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.
Responsibilities :
- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
- Mine and analyze data from multiple databases to drive optimization and improvement of product development, marketing techniques and business strategies.
- Assess the effectiveness and accuracy of new data sources and data gathering techniques.
- Develop custom data models and algorithms to apply to data sets.
- Use predictive modelling to increase and optimize user experiences, revenue generation, ad targeting and other business outcomes.
- Develop various A/B testing frameworks and test model qualities.
- Coordinate with different functional teams to implement models and monitor outcomes.
- Develop processes and tools to monitor and analyze model performance and data accuracy.
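The A/B testing frameworks in the responsibilities above ultimately rest on a significance test; here is a minimal two-proportion z-test using only the stdlib, with made-up conversion counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates A vs. B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 120/1000 vs. A's 90/1000.
z, p = two_proportion_z(90, 1000, 120, 1000)
significant = p < 0.05
```

A full framework adds randomized assignment, guardrail metrics, and pre-registered sample sizes on top of this core computation.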
Qualifications :
- Strong problem solving skills with an emphasis on product development and improvement.
- Advanced knowledge of SQL and its use in data gathering/cleaning.
- Experience using statistical computer languages (R, Python, etc.) to manipulate data and draw insights from large data sets.
- Experience working with and creating data architectures.
- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
- Excellent written and verbal communication skills for coordinating across teams.