Key Responsibilities (Data Developer: Python, Spark)
Experience: 2 to 9 years
Development of data platforms, integration frameworks, processes, and code.
Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages
Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests and unit tests.
Elaborate stories in a collaborative agile environment (Scrum or Kanban)
Familiarity with cloud platforms like GCP, AWS or Azure.
Experience with large data volumes.
Familiarity with writing REST-based services.
Experience with distributed processing and systems
Experience with Hadoop / Spark toolsets
Experience with relational database management systems (RDBMS)
Experience with Data Flow development
Knowledge of Agile and associated development techniques.
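The automated-testing requirement above (unit, integration, and acceptance tests) can be sketched with Python's built-in `unittest`; the `add` function here is a hypothetical stand-in for real application code:

```python
import unittest

# Hypothetical function under test; a stand-in for real application code.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    """Minimal unit-test sketch; integration and acceptance tests
    would exercise whole workflows rather than a single function."""

    def test_add_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-1, 1), 0)

# Run the suite programmatically and keep the result object.
runner = unittest.TextTestRunner()
result = runner.run(unittest.TestLoader().loadTestsFromTestCase(TestAdd))
```

In a real project these tests would live in a separate test module and run under a CI pipeline alongside end-to-end and performance suites.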
About Kapiva:
Kapiva is a modern ayurvedic nutrition brand focused on bringing selectively sourced, natural foods to Indian consumers. Drawing on the wisdom of India's ancient food traditions, Kapiva's high-quality product range includes herbal juices, nutrition powders, ayurvedic gummies, healthy staples, and much more. Our products are top performers on online marketplaces such as Amazon, Flipkart, and Big Basket, and we're growing our presence offline in a big way (Nature’s Basket, Reliance Retail, Noble Plus, etc.). We’re also funded by India’s best consumer VC fund, Fireside Ventures.
About the role:
We are looking for a motivated data analyst with sound experience in handling web/digital analytics to join the Kapiva D2C Business Team. This team is primarily responsible for driving sales and customer engagement on our website (www.kapiva.in). This channel has grown 5x in revenue over the last 12 months and is poised to grow another 5x over the next six months. It represents a high-growth, important part of Kapiva’s overall e-commerce growth strategy.
The mandate here is to run an end-to-end, sustainable e-commerce business, boost sales through marketing campaigns, and build a cutting-edge product (the website) that optimizes the customer’s journey and increases customer lifetime value.
The Data Analyst will support the business heads by providing data-backed insights to drive customer growth, retention, and engagement. They will be required to set up and manage reports, test various hypotheses, and coordinate with various stakeholders on a day-to-day basis.
Location: Bangalore
Job Responsibilities:
Strategy and planning:
- Work with the D2C functional leads and support analytics planning on a quarterly/annual basis
- Identify the reports and analyses to be produced on a daily/weekly/monthly frequency
- Drive planning for hypothesis-led testing of key metrics across the customer funnel
Analytics:
- Interpret data, analyze results using statistical techniques and provide ongoing reports
- Analyze large amounts of information to discover trends and patterns
- Work with business teams to prioritize business and information needs
- Collaborate with engineering and product development teams to set up data infrastructure as needed
Reporting and communication:
- Prepare reports/presentations that present actionable insights to drive business objectives
- Set up live dashboards reporting key cross-functional metrics
- Coordinate with various stakeholders to collect useful and required data
- Present findings to business stakeholders to drive action across the organization
- Propose solutions and strategies to business challenges
Requirements sought:
Must-haves:
- Bachelor’s/Master’s in Mathematics, Economics, Computer Science, Information Management, Statistics, or a related field
- 0-2 years’ experience in an analytics role, preferably in a consumer business. Proven experience as a Data Analyst/Data Scientist
- High proficiency in MS Excel and SQL
- Knowledge of one or more programming languages such as Python or R. Adept at queries, report writing, and presenting findings
- Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy; working knowledge of statistics and statistical methods
- Ability to work in a highly dynamic environment across cross-functional teams; good at coordinating with different departments and managing timelines
- Exceptional English written/verbal communication
- A penchant for understanding consumer traits and behavior and a keen eye to detail
Good to have:
- Hands-on experience with one or more web analytics tools like Google Analytics, Mixpanel, Kissmetrics, Heap, Adobe Analytics, etc.
- Experience in using business intelligence tools like Metabase, Tableau, or Power BI is a plus
- Experience in developing predictive models and machine learning algorithms
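As an illustration of the SQL proficiency listed above, here is a minimal sketch of a recurring revenue report using Python's built-in `sqlite3`; the table and column names are hypothetical, not Kapiva's actual schema:

```python
import sqlite3

# Hypothetical orders table; names and figures are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_date TEXT, channel TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        ("2023-01-01", "website", 1200.0),
        ("2023-01-01", "marketplace", 800.0),
        ("2023-01-02", "website", 1500.0),
    ],
)

# Daily revenue by channel: the kind of recurring report this role owns.
rows = conn.execute(
    """
    SELECT order_date, channel, SUM(revenue) AS total_revenue
    FROM orders
    GROUP BY order_date, channel
    ORDER BY order_date, channel
    """
).fetchall()

for order_date, channel, total in rows:
    print(order_date, channel, total)
```

In practice the same query would run against the production warehouse and feed a live dashboard rather than an in-memory database.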
Machine Learning Engineer
at IDfy
● The machine learning team is a self-contained team of 9 people responsible for building models and services that support key workflows for IDfy.
● Our models are gating criteria for these workflows and as such are expected to perform accurately and quickly. We use a mix of conventional and hand-crafted deep learning models.
● The team comes from diverse backgrounds and experiences. We have ex-bankers, startup founders, IIT-ians, and more.
● We work directly with business and product teams to craft solutions for our customers. We know that we are, and function as, a platform and not a services company.
● Work on all aspects of a production machine learning system: acquiring data, training and building models, deploying models, building API services to expose these models, maintaining them in production, and more.
● Work on performance tuning of models
● From time to time work on support and debugging of these production systems
● Research the latest technology in our areas of interest and apply it to build newer products and enhance the existing platform.
● Building workflows for training and production systems
● Contribute to documentation
About you
● You are an early-career machine learning engineer (or data scientist). Our ideal candidate is someone with 1-3 years of experience in data science.
Must Haves
● You have a good understanding of Python and scikit-learn, TensorFlow, or PyTorch. Our systems are built with these tools/languages and we expect a strong base in them.
● You are proficient at exploratory analysis and know which model to use in most scenarios
● You should have worked on framing and solving problems with the application of machine learning or deep learning models.
● You have some experience in building and delivering complete or part AI solutions
● You appreciate that the role of the Machine Learning engineer is not only modeling, but also building product solutions and you strive towards this.
● Enthusiasm and drive to learn and assimilate state-of-the-art research. A lot of what we are building will require innovative approaches using newly researched models and applications.
Good to Have
● Knowledge of and experience in computer vision. While a large part of our work revolves around computer vision, we believe this is something you can learn on the job.
● We build our own services, hence we would want you to have some knowledge of writing APIs.
● Our stack also includes languages like Ruby, Go, and Elixir. We would love it if you know any of these or take an interest in functional programming.
● Knowledge of and experience in ML Ops and tooling would be a welcome addition. We use Docker and Kubernetes for deploying our services.
We cater to a wide range of entertainment categories including video streaming, music streaming, games and short videos via our MX Player and MX Takatak apps which are our flagship products.
Both MX Player and MX Takatak iOS apps are frequently featured amongst the top 5 apps in the Entertainment category on the Indian App Store. These are built by a small team of engineers based in Mumbai.
Roles and responsibilities:
- Years of experience: 5+
- Ability to synthesize complex data into actionable goals.
- Interpersonal skills to work collaboratively with various stakeholders with competing interests.
- Identify analytical trends and logic to better promote content on the platform
- Evaluate content and carry out qualitative research on Competition Analysis, Viewership trends for OTT.
- Build hypotheses and test them through relevant KPIs.
- Communicate key findings to programming stakeholders and inculcate best practices in the programming team.
Skills & Competencies:
- Good knowledge of BigQuery, SQL, Python/R, MS Excel and Data Infrastructure.
- Ability to work as a team player in a target-driven work environment while meeting deadlines.
- Excellent Time Management Skills.
- Strong Communication Skills.
- Proficient in MS Office Suite
- Strong Python coding and OOP skills
- Should have worked on Big Data product architecture
- Should have worked with at least one SQL-based database (e.g. MySQL, PostgreSQL) and at least one NoSQL database (e.g. Cassandra, Elasticsearch)
- Hands-on experience with Spark frameworks: RDD, DataFrame, Dataset
- Experience developing ETL for data products
- Working knowledge of performance optimization, optimal resource utilization, parallelism, and tuning of Spark jobs
- Working knowledge of file formats: CSV, JSON, XML, Parquet, ORC, Avro
- Good to have: working knowledge of any one analytical database, such as Druid, MongoDB, or Apache Hive
- Experience handling real-time data feeds (working knowledge of Apache Kafka or a similar tool is good to have)
- Python and Scala (Optional), Spark / PySpark, Parallel programming
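The file-format knowledge listed above can be illustrated with a small stdlib-only sketch that converts CSV records into newline-delimited JSON, a common landing format in ETL jobs; real pipelines would use Spark or dedicated libraries for Parquet/ORC/Avro, and the feed contents here are made up:

```python
import csv
import io
import json

# Hypothetical event feed in CSV form (illustrative data only).
csv_data = "user_id,event,count\n1,play,3\n2,pause,1\n"

# Parse CSV rows into typed dictionaries.
reader = csv.DictReader(io.StringIO(csv_data))
records = [
    {"user_id": int(r["user_id"]), "event": r["event"], "count": int(r["count"])}
    for r in reader
]

# Serialize to newline-delimited JSON (one record per line).
ndjson = "\n".join(json.dumps(rec) for rec in records)
print(ndjson)
```

A Spark job would do the equivalent with `spark.read.csv(...)` followed by `df.write.json(...)` (or `.parquet(...)`), distributing the work across executors.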
- Use data to develop machine learning models that optimize decision making in Credit Risk, Fraud, Marketing, and Operations
- Implement data pipelines, new features, and algorithms that are critical to our production models
- Create scalable strategies to deploy and execute your models
- Write well designed, testable, efficient code
- Identify valuable data sources and automate collection processes.
- Preprocess structured and unstructured data.
- Analyze large amounts of information to discover trends and patterns.
Requirements:
- 1+ years of experience in applied data science or engineering with a focus on machine learning
- Python expertise with good knowledge of machine learning libraries, tools, techniques, and frameworks (e.g. pandas, scikit-learn, XGBoost, LightGBM; logistic regression, random forest classifiers, gradient boosting regressors, etc.)
- Strong quantitative and programming skills with a product-driven sensibility
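As a dependency-free sketch of the logistic-regression technique named above (a real project would use `sklearn.linear_model.LogisticRegression`), the toy dataset here is invented for illustration:

```python
import math

# Toy, linearly separable dataset (hypothetical, for illustration only).
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 2.0)]
y = [0, 0, 0, 1, 1, 1]

# Logistic regression trained with per-sample gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5

def predict_proba(x):
    """Sigmoid of the linear score w.x + b."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(2000):
    for xi, yi in zip(X, y):
        err = predict_proba(xi) - yi  # gradient of log-loss w.r.t. the score
        w[0] -= lr * err * xi[0]
        w[1] -= lr * err * xi[1]
        b -= lr * err

# On separable data the fitted model should recover the labels.
preds = [1 if predict_proba(xi) >= 0.5 else 0 for xi in X]
print(preds)
```

The same fit in scikit-learn is two lines (`LogisticRegression().fit(X, y)` then `.predict(X)`); the manual version just makes the gradient step explicit.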
Job Profile:
As an Applied Scientist, you will be responsible for implementing cutting-edge AI/ML algorithms for Computer Vision and Natural Language Processing problems. You will collaborate with engineers and researchers to critically analyze problems and provide insights and visualizations for solving complex problems at Ziroh Labs. You will train AI/ML models in Python and often integrate and deploy them in Java, C++, C#, etc.
Preferred Qualifications:
- Master’s in Data Science, AI/ML, or quantitative fields like Mathematics, Statistics, Computer Science, Physics, etc.
- Experience with Python, TensorFlow, Keras, and PyTorch
- Experience with OpenCV, NLTK, Gensim, spaCy, etc.
- Mathematical understanding of AI/ML algorithms
- Contributions to open-source projects will improve your chances
Big Data Developer
Role Summary/Purpose:
We are looking for Developers/Senior Developers to be part of building an advanced analytical platform leveraging Big Data technologies and transforming legacy systems. This is an exciting, fast-paced, constantly changing and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.
Requirements:
- The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
- An overall minimum of 4 to 8 years of software development experience and 2 years of Data Warehousing domain knowledge
- Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc.
- Excellent knowledge in SQL & Linux Shell scripting
- Bachelor’s/Master’s/Engineering degree from a well-reputed university.
- Strong communication, interpersonal, learning, and organizing skills, matched with the ability to manage stress, time, and people effectively
- Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
- Ability to manage a diverse and challenging stakeholder community
- Diverse knowledge and experience of working in Agile deliveries and Scrum teams.
Responsibilities
- Should work as a senior developer or individual contributor depending on the situation
- Should be part of Scrum discussions and gather requirements
- Adhere to Scrum timelines and deliver accordingly
- Participate in a team environment for the design, development and implementation
- Should take on L3 activities on a need basis
- Prepare Unit/SIT/UAT test cases and log the results
- Coordinate SIT and UAT testing. Take feedback and provide the necessary remediation/recommendations in time.
- Quality delivery and automation should be a top priority
- Coordinate changes and deployments on time
- Should foster healthy harmony within the team
- Owns interaction points with members of the core team (e.g. BA, testing, and business teams) and any other relevant stakeholders
Deep Learning Engineer
at Nanonets
We are looking for an engineer with ML/DL background.
The ideal candidate should have the following skillset:
1) Python
2) Tensorflow
3) Experience building and deploying systems
4) Experience with Theano/Torch/Caffe/Keras is all useful
5) Experience with data warehousing/storage/management would be a plus
6) Experience writing production software would be a plus
7) The ideal candidate will have developed their own DL architectures apart from using open-source architectures.
8) The ideal candidate will have extensive experience with computer vision applications
Candidates will be responsible for building Deep Learning models to solve specific problems. The workflow looks as follows:
1) Define Problem Statement (input -> output)
2) Preprocess Data
3) Build DL model
4) Test on different datasets using Transfer Learning
5) Parameter Tuning
6) Deployment to production
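The six workflow steps above can be laid out schematically in plain Python; the function names and bodies below are illustrative placeholders, not Nanonets' actual pipeline:

```python
# Placeholder functions sketching the six workflow steps.
def define_problem():
    # 1) Problem statement: map an input type to the desired output.
    return {"input": "image", "output": "label"}

def preprocess(raw):
    # 2) Clean and normalize the raw data before training.
    return [x.lower() for x in raw]

def build_model():
    # 3) In practice this would construct a TensorFlow/Keras network.
    return {"layers": ["conv", "pool", "dense"]}

def evaluate(model, datasets):
    # 4) Test on different datasets (transfer learning reuses trained weights).
    return {name: None for name in datasets}

def tune(model, params):
    # 5) Parameter tuning: attach the chosen hyperparameters.
    model["params"] = params
    return model

def deploy(model):
    # 6) Deployment: package the model behind a serving API.
    return f"deployed model with {len(model['layers'])} layers"

spec = define_problem()
data = preprocess(["Cat", "Dog"])
model = tune(build_model(), {"learning_rate": 0.001})
scores = evaluate(model, ["validation", "test"])
print(deploy(model))
```

Each stub would be replaced by real data loading, Keras model code, evaluation loops, and a serving layer; the skeleton only shows how the stages chain together.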
The candidate should have experience working on Deep Learning, with an engineering degree from a top-tier institute (preferably IIT/BITS or equivalent).
Artificial Intelligence Engineer
at Nactus India Services Pvt. Ltd
Nactus is at the forefront of education reinvention, helping educators and the learner community at large through innovative solutions in the digital era. We are looking for an experienced AI specialist to join our revolution using deep learning and artificial intelligence. This is an excellent opportunity to take advantage of emerging trends and technologies to make a real-world difference.
Role and Responsibilities
- Manage and direct research and development (R&D) and processes to meet the needs of our AI strategy.
- Understand company and client challenges and how integrating AI capabilities can help create educational solutions.
- Analyse and explain AI and machine learning (ML) solutions while setting and maintaining high ethical standards.
Skills Required
- Knowledge of algorithms, object-oriented and functional design principles
- Demonstrated artificial intelligence, machine learning, mathematical and statistical modelling knowledge and skills.
- Well-developed programming skills, specifically in SAS or SQL and other packages with statistical and machine learning applications, e.g. R, Python
- Experience with machine learning fundamentals, parallel computing and distributed systems fundamentals, or data structure fundamentals
- Experience with C, C++, or Python programming
- Experience with debugging and building AI applications.
- Analyse conclusions for robustness and productivity.
- Develop a human-machine speech interface.
- Verify, evaluate, and demonstrate implemented work.
- Proven experience with ML, deep learning, TensorFlow, and Python