Responsibilities
- Design experiments, test hypotheses, and build models using traditional datasets and graph data.
- Apply advanced statistical and predictive modeling techniques to build, maintain, and improve multiple real-time decision systems.
- Identify what data is available and relevant, including internal and external data sources, leveraging new data collection processes such as geolocation or social media.
- Use patterns and variations in the volume, velocity, and other characteristics of data for predictive analysis.
- Define the preprocessing and feature engineering to be done on a given dataset, build data augmentation pipelines, train models and tune their hyperparameters, analyze model errors, and design strategies to overcome them.
- Select features, and build and optimize classifiers using machine learning techniques.
- Extend the company's data with third-party sources of information when needed.
- Create automated anomaly detection systems and continuously track their performance.
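Bullets like "train models and tune their hyperparameters" describe a loop that is easy to show in miniature. Below is a toy, standard-library-only sketch of hyperparameter tuning by grid search over a one-feature threshold classifier; the dataset and its 0.6 cutoff are invented, and a real pipeline would reach for something like scikit-learn's GridSearchCV instead.

```python
import random

random.seed(0)

# Toy dataset: one informative feature; the label is 1 when the
# feature exceeds a true cutoff of 0.6 (unknown to the "model").
data = [(random.random(),) for _ in range(200)]
labels = [1 if x[0] > 0.6 else 0 for x in data]

def accuracy(threshold, rows, ys):
    """Score a one-feature threshold classifier."""
    preds = [1 if r[0] > threshold else 0 for r in rows]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

# Simple train/validation split for hyperparameter tuning.
split = 150
train_x, val_x = data[:split], data[split:]
train_y, val_y = labels[:split], labels[split:]

# Grid search over the classifier's single hyperparameter,
# then check the chosen value on held-out data.
grid = [i / 20 for i in range(1, 20)]  # 0.05 .. 0.95
best_t = max(grid, key=lambda t: accuracy(t, train_x, train_y))
val_acc = accuracy(best_t, val_x, val_y)
print(best_t, round(val_acc, 2))
```

The held-out validation score is the "analyzing the errors of the model" step in the bullet above: a gap between training and validation accuracy would be the signal to change strategy.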
Skills and Qualifications
- Bachelor's degree in mathematics, statistics, computer science, or a related field; Master's or PhD preferred.
- Experience with one or more of the following: deep learning methods, NLP, computer vision, sentiment analysis, topic modeling, and graph theory and graph databases
- Experience with common data science tools such as Python, R, PyTorch, TensorFlow, Keras, NLTK, spaCy, or Neo4j, and a good understanding of modeling platforms (Azure AutoML, SageMaker, Databricks, DataRobot, and H2O.ai)
- Experience with big-data distributed processing frameworks and ecosystems: Spark, Hadoop, MapReduce, Pig, Kafka
- Familiarity with Cloud-based environments such as AWS (S3/EC2), Azure, Google Cloud
- Experience with building and deploying predictive and prescriptive analytics models
- Ability to come up with solutions to loosely defined business problems by leveraging pattern detection over potentially large datasets.
- Demonstrable ability to quickly understand new concepts (all the way down to the underlying theorems) and to come up with original solutions to mathematical problems.
- Strong communication and interpersonal skills
About PayU
With 30B+ medical and pharmacy claims covering 300M+ US patients, Compile Data helps life science companies generate actionable insights across different stages of a drug's lifecycle. Through context-driven record-linking and machine-learning algorithms, Compile's platform transforms messy, disparate datasets into an intuitive graph of healthcare providers and all their activities.
Responsibilities:
- Help build intelligent systems to cleanse and record-link healthcare data from over 200 sources
- Build tools and ML modules to generate insights from hard-to-analyse healthcare data, and help solve various business needs of large pharma companies
- Mentor and grow a data science team
Requirements:
- 2-3 years of experience in building ML models, preferably in healthcare
- Experience with neural networks and ML algorithms; have solved problems using panel and transactional data
- Experience working on record-linking problems and NLP approaches to text normalization and standardization is a huge plus
- Proven experience as an ML lead; worked in Python or R, with experience developing big-data ML solutions at scale and integrating them with production software systems
- Ability to build context around key business requirements and present ideas in business- and user-friendly language
- Using automated tools to extract data from primary and secondary sources
- Removing corrupted data and fixing coding errors and related problems
- Developing and maintaining databases and data systems; reorganizing data into a readable format
- Performing analysis to assess the quality and meaning of data
- Using statistical tools to identify, analyse, and interpret patterns and trends in complex data sets that could be helpful for diagnosis and prediction
- Providing data analysis support to essential business functions, creating essential business dashboards so that business performance can be assessed and compared over periods of time
- Preparing reports for management stating trends, patterns, and predictions using relevant data, and generating meaningful insights from the data to support business decision making
- Working with programmers, engineers, and management heads to identify process improvement opportunities, propose system modifications, and devise data governance strategies
- Preparing final analysis reports that help stakeholders understand the data-analysis steps, enabling them to take important decisions based on various facts and trends.
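The "statistical tools to identify, analyse, and interpret patterns and trends" bullet can be illustrated with the simplest such tool: an ordinary-least-squares trend line fitted to a KPI series. The monthly figures below are invented for illustration.

```python
# Minimal sketch: fit a least-squares trend line to a monthly KPI
# series, the kind of pattern/trend check a report or dashboard
# would surface. The figures are made up.
monthly_sales = [102, 108, 105, 115, 120, 118, 127, 133, 131, 140]

n = len(monthly_sales)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(monthly_sales) / n

# Ordinary least squares: slope = cov(x, y) / var(x)
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_sales))
var_x = sum((x - mean_x) ** 2 for x in xs)
slope = cov_xy / var_x
intercept = mean_y - slope * mean_x

trend = "upward" if slope > 0 else "downward or flat"
print(f"slope={slope:.2f} per month -> {trend}")
```

In a management report, the slope would be translated into plain language ("sales are growing by roughly 4 units per month") rather than reported as a raw coefficient.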
Skills Required
- A successful analytics lead needs a combination of technical, management, and leadership skills
- A background in Mathematics, Statistics, Computer Science, Information Management, or Economics can serve as a solid foundation for a career in analytics
- Bachelor's degree required; a postgraduate degree is preferred, along with 5+ years of experience in the analytics field
- Knowledge of programming languages like SQL, Oracle, R, MATLAB, and Python
- Comfort with management data reporting tools like MS Excel and Google Sheets
- Technical proficiency in database design and development, data models, and techniques for data mining and segmentation
- Experience in handling reporting packages like Business Objects, programming (JavaScript, XML, or ETL frameworks), and databases
- Knowledge of data visualization software like Tableau
- Knowledge of how to create and apply the most accurate algorithms to datasets in order to find solutions
- Problem-solving skills
- Accuracy and attention to detail
- Adept at queries, writing reports, and making presentations
- Team-working skills
- Verbal and written communication skills
- Proven working experience in data analysis
Survey Analytics
at Leading Management Consulting Firm
We are looking for candidates who have demonstrated both a strong business sense and deep understanding of the quantitative foundations of modelling.
• Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions
• Experience with statistical programming software such as SPSS, and comfort working with large data sets
• R, Python, SAS, and SQL are preferred but not mandatory
• Excellent time management skills
• Good written and verbal communication skills; understanding of both written and spoken English
• Strong interpersonal skills
• Ability to act autonomously, bringing structure and organization to work
• Creative and action-oriented mindset
• Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly, and methodologies are regularly challenged
• Ability to work under pressure and deliver on tight deadlines
Qualifications and Experience:
• Graduate degree in Statistics, Economics, Econometrics, Computer Science, Engineering, or Mathematics, an MBA (with a strong quantitative background), or equivalent
• Strong track record of work experience in business intelligence, market research, and/or advanced analytics
• Knowledge of data collection methods (focus groups, surveys, etc.)
• Knowledge of statistical packages (SPSS, SAS, R, Python, or similar), databases, and MS Office (Excel, PowerPoint, Word)
• Strong analytical and critical thinking skills
• Industry experience in Consumer Experience/Healthcare a plus
About Drip Capital & Tech Team
The engineering team at Drip Capital is responsible for building and maintaining the online global trade financing platform that supports the interactions between buyers, sellers, financing partners, insurance agents, global retail partners, trade agents, shipping & transportation companies, supply chain, and warehousing companies worldwide.
Our primary goal is to ensure that customers are provided time-critical capital and at the same time balance requirements related to risk, fraud management, and compliance. The services are accessed by customers worldwide and hence the engineering systems need to be policy-driven, easily reconfigurable, and able to handle multiple regional languages. We use machine learning for risk classifications/predictions, intelligent document parsing subsystems, robotic process automation, REST APIs to connect our microservices, and a cloud-based data lake and warehouse for data storage and analysis.
Our team comprises talent from top-tier institutions including Wharton, Stanford, and IITs with years of experience at companies like Google, Amazon, Standard Chartered, Blackrock, and Yahoo. We are backed by leading Silicon Valley investors - Sequoia, Wing, Accel, and Y Combinator. We are a global company headquartered in Silicon Valley along with offices in India and Mexico.
Your Role
- Partner with stakeholders such as Analysts, Data Scientists, Product Managers, and Leadership to understand their data needs.
- Design, build, and launch complex data pipelines that move data from multiple sources such as MySQL, MongoDB, and S3.
- Design and maintain data warehouse and data lake solutions.
- Build data expertise and own data quality for your areas
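The pipeline work described above follows the classic extract-transform-load shape, which can be sketched in a few lines. An in-memory SQLite database stands in for the real sources (MySQL, MongoDB, S3) and the warehouse, and the table and column names are invented.

```python
import sqlite3

# Minimal ETL sketch. SQLite stands in for the real source and
# warehouse systems; schema and names are invented for illustration.
source = sqlite3.connect(":memory:")
warehouse = sqlite3.connect(":memory:")

# -- Extract: pull raw rows from the source system.
source.execute("CREATE TABLE orders (id INTEGER, amount TEXT, country TEXT)")
source.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "250.00", "in"), (2, None, "mx"), (3, "99.50", "US")],
)
raw = source.execute("SELECT id, amount, country FROM orders").fetchall()

# -- Transform: drop corrupt rows, normalize types and casing.
clean = [
    (oid, float(amount), country.upper())
    for oid, amount, country in raw
    if amount is not None
]

# -- Load: write the cleaned rows into the warehouse table.
warehouse.execute(
    "CREATE TABLE fact_orders (id INTEGER, amount REAL, country TEXT)"
)
warehouse.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", clean)
warehouse.commit()

loaded = warehouse.execute(
    "SELECT COUNT(*), SUM(amount) FROM fact_orders"
).fetchone()
print(loaded)
```

"Own data quality for your areas" is the transform step here: deciding which rows are corrupt, what normalization applies, and recording those rules so downstream analysts can trust the warehouse tables.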
Our Checklist
- 2+ years of experience in any scripting language like Python
- Very good knowledge of SQL
- Strong problem solving and communication skills
- Process-oriented with great documentation skills
- Collaborative team spirit
Good to have
- Previous experience setting up custom ETL pipelines
- Knowledge of NoSQL
- Experience working with data warehousing tools like AWS Redshift or Google BigQuery.
If you love building scalable, high-performance, reliable distributed systems and want to work with people who feel the same way you do, let's talk!
Key Responsibilities (Data Developer: Python, Spark)
Experience: 2 to 9 years
Development of data platforms, integration frameworks, processes, and code.
Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages.
Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests and unit tests.
Elaborate stories in a collaborative agile environment (Scrum or Kanban).
Familiarity with cloud platforms like GCP, AWS or Azure.
Experience with large data volumes.
Familiarity with writing REST-based services.
Experience with distributed processing and systems
Experience with Hadoop / Spark toolsets
Experience with relational database management systems (RDBMS)
Experience with Data Flow development
Knowledge of Agile and associated development techniques.
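The "comprehensive automated tests" bullet above can be sketched with the standard library's unittest module. The pagination helper and its contract are invented for illustration; a suite like this runs with `python -m unittest`.

```python
import unittest

def paginate(items, page, per_page):
    """Hypothetical helper behind a REST endpoint: return one page of results."""
    if page < 1 or per_page < 1:
        raise ValueError("page and per_page must be positive")
    start = (page - 1) * per_page
    return items[start:start + per_page]

class PaginateTests(unittest.TestCase):
    # Unit tests pin the happy path, the edge case, and the error case.
    def test_first_page(self):
        self.assertEqual(paginate([1, 2, 3, 4, 5], 1, 2), [1, 2])

    def test_past_the_end_is_empty(self):
        self.assertEqual(paginate([1, 2, 3], 9, 2), [])

    def test_rejects_bad_input(self):
        with self.assertRaises(ValueError):
            paginate([], 0, 2)
```

End-to-end and performance tests mentioned in the bullet would live in separate suites, exercising the deployed API rather than the helper directly.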
About Us:
We are a VC-funded startup solving one of the biggest transportation problems India faces. Most passengers in India travel long distances on IRCTC trains. At the time of booking, approximately 1 out of every 2 passengers ends up with a Waitlisted or RAC ticket. This creates a lot of anxiety for passengers, as the Railways announces only 4 hours before departure whether they have a confirmed seat. We solve this problem through our Waitlist & RAC Protection, which can be bought against each IRCTC ticket at the time of booking. If the train ticket is not confirmed, we fly the passenger to the destination. Our team consists of 3 founders from IIT, IIM, and ISB.
Functional Experience:
- Computer Science or IT Engineering background with solid understanding of basics of Data Structures and Algorithms
- 2+ years of data science experience working with large datasets
- Expertise in Python packages like pandas, NumPy, scikit-learn, Matplotlib, seaborn, Keras, and TensorFlow
- Expertise in Big Data technologies like Hadoop, Cassandra and PostgreSQL
- Expertise in Cloud computing on AWS with EC2, AutoML, Lambda and RDS
- Good knowledge of Machine Learning and Statistical time series analysis (optional)
- Unparalleled logical ability, making you the go-to person for all things data
- You love coding like a hobby and are up for a challenge!
Cultural:
- Assume a strong sense of ownership of analytics: design, develop & deploy
- Collaborate with senior management, operations & business team
- Ensure Quality & sustainability of the architecture
- Motivation to join an early stage startup should go beyond compensation
Fresher Data Engineer- Python+SQL (Internship+Job Opportunity)
- 3+ years of experience in deployment, monitoring, tuning, and administration of high-concurrency MySQL production databases.
- Solid understanding of writing optimized SQL queries on MySQL databases
- Understanding of AWS: VPC, networking, security groups, IAM, and roles.
- Expertise in scripting in Python or Shell/Powershell
- Must have experience in large scale data migrations
- Excellent communication skills.
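"Writing optimized SQL queries" most often means making sure hot predicates are served by an index rather than a full table scan. The sketch below demonstrates the difference using Python's built-in sqlite3 as a stand-in for MySQL (on MySQL you would read EXPLAIN output instead of EXPLAIN QUERY PLAN); the table and index names are invented.

```python
import sqlite3

# Sketch of why indexed lookups matter when tuning queries.
# SQLite stands in for MySQL here; names are illustrative only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
db.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(10_000)],
)

# Without an index, this predicate forces a full table scan.
plan_before = db.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?",
    ("user42@example.com",),
).fetchall()

# Adding an index turns the scan into a direct index lookup.
db.execute("CREATE INDEX idx_users_email ON users (email)")
plan_after = db.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?",
    ("user42@example.com",),
).fetchall()

print(plan_before[-1][-1])  # reports a scan of the table
print(plan_after[-1][-1])   # reports a search using idx_users_email
```

On a high-concurrency production database the same check, reading the query plan before and after an index change, is the routine first step of tuning work.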
Manager - Digital Analytics
at DSP Investment Managers Pvt Ltd
About DSP e-business Division
The e-Business division at DSP is a specialist in-house team working to take advantage of the changing internet & mobile landscape in India, which is resulting in a growing preference for online commerce. We are working to bring in a refreshing approach that super-simplifies investing in Mutual Funds.
Analytics, Automation, Design, Device-agnosticism & Simplicity are at the heart of our e-business strategy. Our products (IFAXpress, the B2C transaction portal of DSP web, our Android app, our iOS app, and our investing decision tool) have demonstrated what we intend to do going forward.
What is the role's objective?
This role will own setting up analytics around our digital products and creating data & analytics assets that enable insights & recommendations for Digital Product Owners, Marketing, and Management.
- The opportunity is to capture insights from the huge volume of data that flows from customer visits, interactions with different features on our website, and their behavior patterns.
- Expertise in handling Data from Digital analytics platforms & specialized tools
- Real-time tracking of visitor actions, setting up triggers, segmentation & KPI measurement to enable changes in user/investor journeys
- Work closely in partnership with Digital Product Owners and Marketing teams
What skills do you need to possess?
- Proficient in handling digital data (millions of rows) created by user actions on the website/app
- Proficient in any suite of digital tools and CDPs, such as Adobe Suite (SiteCatalyst, Experience Manager, Omniture), Google 360, Lemnisk, WebEngage, etc.
- Adobe expertise; programming: Java, Python, HTML, CSS, JavaScript, jQuery
- Experience in Tag implementation, measurement and optimization
- Setting up dashboard to track and measure various digital KPIs
- Exposure to cloud platforms like AWS or GCP, or big data technologies such as HDFS and Kafka
- Good to have: domain experience and exposure to creating insight packs for senior management
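The segmentation and KPI-measurement work described above reduces, at its core, to aggregating raw events into metrics. A minimal sketch, with an invented clickstream and invented action names standing in for real tracking events:

```python
# Minimal sketch: turn a raw event log into a dashboard KPI.
# The events and action names below are invented for illustration.
events = [
    {"user": "u1", "action": "visit"},
    {"user": "u1", "action": "start_sip"},
    {"user": "u2", "action": "visit"},
    {"user": "u3", "action": "visit"},
    {"user": "u3", "action": "start_sip"},
    {"user": "u3", "action": "invest"},
]

# Segment users by the actions they completed.
visitors = {e["user"] for e in events if e["action"] == "visit"}
investors = {e["user"] for e in events if e["action"] == "invest"}

# KPI: what share of visitors completed an investment?
conversion_rate = len(investors & visitors) / len(visitors)
print(f"visitors={len(visitors)} conversion={conversion_rate:.1%}")
```

In production the event stream would come from a tag manager or CDP rather than a Python list, but the funnel logic on a dashboard is the same set arithmetic.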