Required: Python, R
Hands-on work handling large-scale data engineering pipelines.
Excellent verbal and written communication skills.
Proficient in PowerPoint or other presentation tools.
Ability to work quickly and accurately on multiple projects.
About MedCords
Here is our story:
1. MedCords on YourStory
MedCords is India's first digital healthcare platform designed for India and developing countries. We've created the most secure and intuitive healthcare ecosystem that will revolutionise how Indians perceive healthcare.
Essentially, through our strong tech, data science, and ground-operations teams, we're creating the "Aadhaar for Health" in India.
MedCords started with the vision of making health management easier and healthcare affordable and accessible for all, whether or not a person has a smartphone.
2. MedCords in Top 30 Tech Startups of 2018: https://yourstory.com/
3. Our Investors: https://www.
● Experience with various stream-processing and batch-processing tools (Kafka, Spark, etc.).
● Programming with Python.
● Experience with relational and non-relational databases.
● Fairly good understanding of AWS (or any equivalent).
Key Responsibilities
● Design new systems and redesign existing systems to work at scale.
● Care about things like fault tolerance, durability, backups and recovery, performance, maintainability, code simplicity, etc.
● Lead a team of software engineers and help create an environment of ownership and learning.
● Introduce best practices of software development and ensure their adoption across the team.
● Help set and maintain coding standards for the team.
Job Description:
The data science team is responsible for solving business problems with complex data. Data complexity can be characterized in terms of volume, dimensionality, and multiple touchpoints/sources. We understand the data, ask fundamental, first-principles questions, and apply our analytical and machine-learning skills to solve the problem in the best way possible.
Our ideal candidate
This is a client-facing role, hence good communication skills are a must.
The candidate should have the ability to communicate complex models and analysis in a clear and precise manner.
The candidate would be responsible for:
- Comprehending business problems properly - what to predict, how to build the dependent variable (DV), what value addition they bring to the client, etc.
- Understanding and analyzing large, complex, multi-dimensional datasets and building features relevant for the business
- Understanding the math behind algorithms and choosing one over another
- Understanding approaches like stacking and ensembling and applying them correctly to increase accuracy (a brief illustration follows this list)
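As a purely illustrative sketch of the stacking/ensembling mentioned above, here is a minimal scikit-learn example; the synthetic dataset and the particular base models are assumptions for demonstration only, not part of the role description.

# Minimal stacking-ensemble sketch (illustrative only; dataset and models are hypothetical).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a large, multi-dimensional business dataset.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=15, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Two diverse base learners whose predictions are combined by a meta-learner.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
    ("gb", GradientBoostingClassifier(random_state=42)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5)

stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))

The point of stacking here is that the meta-learner weighs the base models' out-of-fold predictions, which typically improves accuracy over any single model when the base learners make different kinds of errors.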
Desired technical requirements
- Proficiency with Python and the ability to write production-ready code.
- Experience in PySpark, machine learning, and deep learning
- Big data experience (e.g., familiarity with Spark, Hadoop) is highly preferred
- Familiarity with SQL or other databases.
About CarWale: CarWale's mission is to bring delight to car buying. We offer a bouquet of reliable tools and services to help car consumers decide on buying the right car, at the right price, and from the right partner. CarWale has always strived to serve car buyers and owners in the most comprehensive and convenient way possible. We provide a platform where car buyers and owners can research, buy, sell, and come together to discuss and talk about their cars. We aim to empower Indian consumers to make informed car-buying and ownership decisions with exhaustive and unbiased information on cars through our expert reviews, owner reviews, detailed specifications, and comparisons. We understand that a car is, by and large, the second-most expensive asset a consumer associates with their lifestyle. Together with CarTrade and BikeWale, we are the market leaders in the personal mobility media space.
About the Team: We are a bunch of enthusiastic analysts assisting all business functions with their data needs. We deal with huge but diverse datasets to find relationships, patterns, and meaningful insights. Our goal is to help drive growth across the organization by creating a data-driven culture.
We are looking for an experienced Data Scientist who likes to explore opportunities and knows their way around data, building world-class solutions that make a real impact on the business.
Skills / Requirements –
- 3-5 years of experience working on Data Science projects
- Experience doing statistical modelling of big data sets
- Expert in Python and R, with deep knowledge of ML packages
- Expert in fetching data from SQL
- Ability to present and explain data to management
- Knowledge of AWS would be beneficial
- Demonstrated structural and analytical thinking
- Ability to structure and execute a data science project end to end
Education –
Bachelor's degree in a quantitative field (Maths, Statistics, Computer Science); a Master's degree is preferred.
Required Experience
· 3+ years of relevant technical experience in a data analyst role
· Intermediate/expert skills with SQL and basic statistics
· Experience in advanced SQL
· Python programming - an added advantage
· Strong problem solving and structuring skills
· Experience automating connections to various data sources and representing the data through dashboards
· Excellent with numbers and able to communicate data points through various reports/templates
· Ability to communicate effectively internally and outside Data Analytics team
· Proactively take up work responsibilities and take on ad-hoc requests as and when needed
· Ability and desire to take ownership of and initiative for analysis; from requirements clarification to deliverable
· Strong technical communication skills; both written and verbal
· Ability to understand and articulate the "big picture" and simplify complex ideas
· Ability to identify and learn applicable new techniques independently as needed
· Must have worked with various Databases (Relational and Non-Relational) and ETL processes
· Must have experience handling large volumes of data and adhering to optimization and performance standards
· Should have the ability to analyse and provide relationship views of the data from different angles
· Must have excellent Communication skills (written and oral).
· Knowledge of data science is an added advantage
Required Skills
MySQL, Advanced Excel, Tableau, reporting and dashboards, MS Office, VBA, analytical skills
Preferred Experience
· Strong understanding of relational databases such as MySQL
· Prior experience working remotely full-time
· Prior experience working with advanced SQL
· Experience with one or more BI tools, such as Superset, Tableau etc.
· High level of logical and mathematical ability in Problem Solving
About Us:-
The fastest-rising startup in the EdTech space, focussed on engineering and government job exams and with an eye on capturing UPSC, PSC, and international exams, Testbook is poised to revolutionize the industry. With a registered user base of over 2.2 crore students, more than 450 crore questions solved on the web app, and a knockout Android app, Testbook has raced to the front and is ideally placed to capture bigger markets.
Testbook is the perfect incubator for talent. You come, you learn, you conquer. You train under the best mentors and become an expert in your field in your own right. That being said, the flexibility in the projects you choose, how and when you work on them, what you want to add to them is respected in this startup. You are the sole master of your work.
The IIT pedigree of the co-founders has attracted some of the brightest minds in the country to Testbook. A team that is quickly swelling in ranks, it now stands at 500+ in-house employees and hundreds of remote interns and freelancers. And the number is rocketing weekly. Now is the time to join the force.
In this role you will get to:-
- Work with state-of-the-art data frameworks and technologies like Dataflow (Apache Beam), Dataproc (Apache Spark & Hadoop), Apache Kafka, Google PubSub, Apache Airflow, and others.
- You will work cross-functionally with various teams, creating solutions that deal with large volumes of data.
- You will work with the team to set and maintain standards and development practices.
- You will be a keen advocate of quality and continuous improvement.
- You will modernize the current data systems to develop Cloud-enabled Data and Analytics solutions
- Drive the development of cloud-based data lake, hybrid data warehouses & business intelligence platforms
- Improve upon the data ingestion models, ETL jobs, and alerts to maintain data integrity and data availability (a minimal illustrative sketch follows this list)
- Build data pipelines to ingest structured and unstructured data.
- Gain hands-on experience with new data platforms and programming languages
- Analyze and provide data-supported recommendations to improve product performance and customer acquisition
- Design, Build and Support resilient production-grade applications and web services
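For illustration only, below is a minimal Apache Airflow sketch of the kind of scheduled ingestion/ETL job with integrity checks mentioned above; the DAG id, task names, and callables are hypothetical placeholders, not a description of Testbook's actual stack.

# Minimal Airflow DAG sketch (names and logic are hypothetical).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_raw_events():
    # Hypothetical: pull structured/unstructured events from a source into the data lake.
    pass

def check_data_integrity():
    # Hypothetical: validate row counts/schema; a failure here surfaces as an alert.
    pass

with DAG(
    dag_id="example_ingestion_etl",  # hypothetical DAG name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_raw_events", python_callable=ingest_raw_events)
    validate = PythonOperator(task_id="check_data_integrity", python_callable=check_data_integrity)
    ingest >> validate

The dependency arrow (ingest >> validate) is what lets a scheduler like Airflow enforce data integrity checks after every ingestion run rather than relying on manual monitoring.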
Who you are:-
- 1+ years of work experience in Software Engineering and development.
- Very strong understanding of Python and the pandas library. Good understanding of Scala, R, and other related languages.
- Experience with data transformation & data analytics in both batch & streaming mode using cloud-native technologies.
- Strong experience with the big data technologies like Hadoop, Spark, BigQuery, DataProc, Dataflow
- Strong analytical and communication skills.
- Experience working with large, disconnected, and/or unstructured datasets.
- Experience building and optimizing data pipelines, architectures, and data sets using cloud-native technologies.
- Hands-on experience with any cloud tech like GCP/AWS is a plus.
Job Description
We are looking for an experienced engineer with superb technical skills. You will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytical solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, scalable software solutions.
Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, machine learning is a plus.
Skills
- Bachelor's/Master's/PhD in CS or equivalent industry experience
- Demonstrated expertise of building and shipping cloud native applications
- 5+ years of industry experience in administering (including setting up, managing, and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka Streams and PySpark, and streaming databases like Druid or equivalents like Hive
- Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
- Experience with cloud platform services such as AWS, Azure or GCP, especially with EKS and Managed Kafka
- 5+ years of industry experience in Python
- Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
- Experience with scripting languages. Python experience highly desirable. Experience in API development using Swagger
- Implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools, such as Git
- Familiarity with continuous integration, e.g. Jenkins
Responsibilities
- Architect, design, and implement large-scale data processing pipelines using Kafka Streams, PySpark, Fluentd and Druid (a minimal sketch follows this list)
- Create custom Operators for Kubernetes, Kubeflow
- Develop data ingestion processes and ETLs
- Assist with DevOps operations
- Design and Implement APIs
- Identify performance bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and documentation
- Communicate with stakeholders regarding various aspects of the solution.
- Mentor team members on best practices
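As a minimal, hedged sketch of the Kafka-plus-PySpark style of pipeline named above: the broker address, topic name, and output sink are placeholders, and the Fluentd/Druid pieces are out of scope here.

# PySpark Structured Streaming from Kafka (illustrative only; requires the spark-sql-kafka connector package).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("example-stream").getOrCreate()

# Read a stream of events from Kafka (broker and topic are placeholders).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events-topic")
          .load())

# Kafka delivers key/value as binary; cast the payload to string for downstream parsing.
parsed = events.select(col("value").cast("string").alias("payload"))

# Write the stream out; a real pipeline would target Druid or a data lake rather than the console.
query = (parsed.writeStream
         .format("console")
         .outputMode("append")
         .start())
query.awaitTermination()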
Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (see the brief sketch after this list).
• Good experience with SQL databases; able to write queries of fair complexity.
• Excellent experience in big data programming for data transformation and aggregations.
• Good at ELT architecture: business-rules processing and data extraction from a data lake into data streams for business consumption.
• Good customer communication skills.
• Good analytical skills.
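A brief, hedged sketch of the DataFrame core functions and Spark SQL usage referred to above; the file path and column names are invented for illustration.

# Batch DataFrame transformation/aggregation and the equivalent Spark SQL (paths and columns are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-batch").getOrCreate()

# Hypothetical orders dataset sitting in a data lake.
orders = spark.read.parquet("s3://example-bucket/orders/")

# DataFrame API: transformation plus aggregation.
daily_revenue = (orders
                 .withColumn("order_date", F.to_date("order_ts"))
                 .groupBy("order_date")
                 .agg(F.sum("amount").alias("revenue"),
                      F.countDistinct("customer_id").alias("customers")))

# Equivalent Spark SQL over a temporary view.
orders.createOrReplaceTempView("orders")
daily_revenue_sql = spark.sql("""
    SELECT to_date(order_ts) AS order_date,
           SUM(amount) AS revenue,
           COUNT(DISTINCT customer_id) AS customers
    FROM orders
    GROUP BY to_date(order_ts)
""")

daily_revenue.show(5)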
Work Location : Chennai
Experience Level : 5+yrs
Package : Up to 18 LPA
Notice Period : Immediate Joiners
It's a full-time opportunity with our client.
Mandatory Skills: Machine Learning, Python, Tableau & SQL
Job Requirements:
--2+ years of industry experience in predictive modeling, data science, and analysis.
--Experience with ML models including but not limited to Regression, Random Forests, XGBoost.
--Experience in an ML engineer or data scientist role building and deploying ML models, or hands-on experience developing deep learning models.
--Experience writing code in Python and SQL with documentation for reproducibility.
--Strong Proficiency in Tableau.
--Experience handling big datasets, diving into data to discover hidden patterns, using data visualization tools, writing SQL.
--Experience writing and speaking about technical concepts to business, technical, and lay audiences and giving data-driven presentations.
--AWS SageMaker experience is a plus, not required.
About WheelsEye :
Logistics in India is a complex business - layered with multiple stakeholders, unorganized, primarily offline, and with many trivial yet deep-rooted problems. Though this industry contributes 14% to the GDP, its problems have gone unattended and ignored, until now.
WheelsEye is a logistics company, building a digital infrastructure around fleet owners. Currently, we offer solutions to empower truck fleet owners. Our proprietary software & hardware solutions help automate operations, secure fleet, save costs, improve on-time performance, and streamline their business.
Why WheelsEye?
- Work on a real Indian problem of scale and impact the lives of 5.5 crore fleet owners, drivers, and their families in a meaningful way
- Different from current market players: heavily focused on and built around truck owners
- A problem-solving and learning-oriented organization
- Audacious goals, high speed, and action orientation
- Opportunity to scale the organization across the country
- Opportunity to build and execute the culture
- Contribute to and become a part of the action plan for building the tech, finance, and service infrastructure for the logistics industry
It's tough!
Requirements:
- Bachelor's degree plus 2-5 years of experience in the analytics domain
- Experience in articulating and translating business questions and using statistical techniques to arrive at an answer using available data
- Proficient with scripting and/or programming language, e.g. Python, R(Optional), Advanced SQL; advanced knowledge of data processing, database programming and data analytics tools and techniques
- Extensive background in data mining, modelling and statistical analysis; able to understand various data structures and common methods in data transformation e.g. Linear and logistic regression, clustering, decision trees etc.
- Working knowledge of tools like Mixpanel, Metabase, Google sheets, Google BigQuery & Data studio is preferred
- Ability to self-start and work in a self-directed way in a fast-paced environment
If you are willing to work on solving real-world problems for truck owners, join us!
Intro
Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops, and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.
What will you do
The data engineer is focused on making data correct and accessible, and building scalable systems to access/process it. Another major responsibility is helping AI/ML Engineers write better code.
- Optimize and automate ingestion processes for a variety of data sources, such as click-stream, transactional, and many other sources.
- Create and maintain optimal data pipeline architecture and ETL processes
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Develop data pipeline and infrastructure to support real-time decisions
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big-data technologies (a minimal illustrative sketch follows this list).
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
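As a minimal, hedged sketch of the extract-transform-load step described above: the source file, table name, column names, and connection string are placeholders, and a production pipeline would use the team's actual AWS tooling rather than a single script.

# Tiny batch ETL sketch: click-stream export into a warehouse table (all names are hypothetical).
import pandas as pd
from sqlalchemy import create_engine

# Extract: hypothetical click-stream export (path is a placeholder).
events = pd.read_json("clickstream_events.jsonl", lines=True)

# Transform: keep the fields needed downstream and normalise timestamps.
events["event_time"] = pd.to_datetime(events["event_time"], utc=True)
cleaned = events[["user_id", "event_type", "event_time"]].dropna()

# Load: append into a warehouse table (connection string is a placeholder).
engine = create_engine("postgresql://user:password@warehouse-host:5432/analytics")
cleaned.to_sql("clickstream_events", engine, if_exists="append", index=False)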
What will you need
- 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
- Experience dealing with data at large scale
- Proficiency in writing and debugging complex SQL queries
- Experience working with AWS big data tools
- Ability to lead projects and implement best data practices and technologies
Data Pipelining
- Strong command in building & optimizing data pipelines, architectures and data sets
- Strong command of relational SQL and NoSQL databases, including Postgres
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Big Data: Strong experience in big data tools & applications
- Tools: Hadoop, Spark, HDFS, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift
- Stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Message queuing: RabbitMQ, Spark, etc.
Software Development & Debugging
- Strong experience in object-oriented programming / object-function scripting languages: Python, Java, C++, Scala, etc.
- Strong hold on data structures & algorithms
What would be a bonus
- Prior experience working in a fast-growth Startup
- Prior experience in payments, fraud, lending, or advertising companies dealing with large-scale data