- Understand and translate statistics and analytics to address business problems
- Help with data preparation and data pulls, the first step in machine learning
- Able to slice and dice data to extract interesting insights
- Develop models for better customer engagement and retention
- Hands-on experience with relevant tools: SQL (expert), Excel, and R/Python
- Work on strategy development to increase business revenue
- Strong knowledge of statistics
- Able to perform data scraping and data mining
- Be self-driven, and show ability to deliver on ambiguous projects
- An ability and interest in working in a fast-paced, ambiguous and rapidly-changing environment
- Should have worked on business projects for an organization, e.g. customer acquisition or customer retention
About Angel One
We are Angel One (formerly known as Angel Broking), India's most trusted fintech company and an all-in-one financial house. Founded in 1996, Angel One offers a world-class experience across all digital channels, including web, trading software, and mobile applications, to help millions of Indians make informed investment decisions.
Certified as a Great Place To Work for six consecutive years, we are driven by technology and a mission to become the No. 1 fintech organization in India. With a registered client base of over 9.2 million and more than 18 million app downloads, we onboard more than 400,000 new users every month. We are working to build personalized financial journeys for customers via a single app, powered by new-age engineering and machine learning.
We are a group of self-driven, motivated individuals who enjoy taking ownership and believe in providing the best value for money to investors through innovative products and investment strategies. We apply and amplify design thinking in our products and solutions.
Ours is a flat structure, with ample opportunity to showcase your talent and a growth path for engineers to the very top. We are remote-first, with people spread across Bangalore, Mumbai, and the UAE. Here are some of the perks you'll enjoy as an Angelite:
- Work with world-class peer group from leading organizations
- Exciting, dynamic and agile work environment
- Freedom to ideate, innovate, express, solve and create customer experience through #Fintech & #ConsumerTech
- Cutting-edge technology and the products/digital platforms of the future
- Continuous learning interventions and upskilling
- An open, collaborative culture where failing fast is encouraged as a way to invent new methods; join our Failure Club to experience it
- Certified six times as a Great Place To Work
- Highly competitive pay structures, among the best in the industry
Come say Hello to ideas and goodbye to hierarchies at Angel One!
- Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse.
- Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
- Assemble large, complex data sets from third-party vendors to meet business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
- Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.
- 5+ years of experience in a Data Engineer role.
- Proficiency in Linux.
- Strong SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with MySQL, MongoDB, Cassandra, and Athena.
- Must have experience with Python/Scala.
- Must have experience with Big Data technologies like Apache Spark.
- Must have experience with Apache Airflow.
- Experience with data pipeline and ETL tools like AWS Glue.
- Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
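To give a flavor of the query-authoring skills listed above, here is a minimal sketch using Python's built-in sqlite3 module: the table, column names, and data are hypothetical stand-ins for a production MySQL or Athena schema.

```python
import sqlite3

# In-memory database standing in for a production relational store (hypothetical schema)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (client_id INTEGER, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [(1, "INFY", 10), (1, "TCS", 5), (2, "INFY", 20)],
)

# Aggregate query: total quantity traded per symbol, largest first
rows = conn.execute(
    "SELECT symbol, SUM(qty) AS total_qty"
    " FROM trades GROUP BY symbol ORDER BY total_qty DESC"
).fetchall()
print(rows)  # [('INFY', 30), ('TCS', 5)]
```

The same GROUP BY / ORDER BY pattern carries over directly to MySQL, Redshift, or Athena, which is why fluency in plain SQL is listed before any specific engine.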
We allow customers to "buy now and pay later" for goods and services purchased through online and offline portals. We are a rapidly growing organization opening up new payment avenues for online and offline customers.
Define and continuously refine the analytics roadmap.
Build, deploy, and maintain the data infrastructure that supports all analysis, including the data warehouse and various data marts.
Build, deploy and maintain the predictive models and scoring infrastructure that powers critical decision management systems.
Strive to devise ways to gather more alternative data and build increasingly enhanced predictive models.
Partner with business teams to systematically design experiments to continuously improve customer acquisition, minimize churn, reduce delinquency and improve profitability
Provide data insights to all business teams through automated queries, MIS, etc.
4+ years of deep, hands-on analytics experience in a management consulting, start-up, financial services, or fintech company.
Strong knowledge of SQL and Python.
Deep knowledge of problem-solving approach using analytical frameworks.
Deep knowledge of frameworks for data management, deployment, and monitoring of performance metrics.
Hands-on exposure to delivering improvements through test and learn methodologies.
Excellent communication and interpersonal skills, with the ability to be pleasantly persistent.
We are a growing startup in the healthcare space; our business model is largely unexplored, and that is exciting!
Our company decisions are heavily guided by insights we get from data. So this position is key and core to our business growth. You can make a real impact. We are looking for a data scientist who is passionate about contributing to the growth of MyYogaTeacher and propelling the company to newer heights.
We encourage you to spend some time browsing through content on our website myyogateacher.com and maybe even sign up for our service and try it out!
As a Data Scientist, you’ll
- Help collect data from a variety of sources: assess and address data quality, filter and cleanse data, and identify missing data
- Help measure, transform and organize data into readily usable formats for reporting and further analysis
- Develop and implement analytical databases and data collection systems
- Analyze data in meaningful ways. Use statistical methods and data mining algorithms to analyze data and generate useful insights and reports
- Develop recommendation engines in a variety of areas
- Identify and recommend new ways to optimize and streamline data collection processes
- Collaborate with programmers, engineers, and organizational leaders to identify opportunities for process improvements, recommend system modifications, and develop policies for data governance
You are qualified if:
- You hold a Bachelor's and/or Master's degree in Mathematics, Statistics, Computer Engineering, Data Science, Data Analytics, or Data Mining
- 3+ years experience in a data analyst role
- 3+ years of data mining and machine learning experience
- A strong understanding of databases such as MySQL and Amazon Redshift, and deep fluency in SQL
- Good knowledge of NoSQL databases such as MongoDB and ClickHouse
- Knowledge of SQL, Oracle, R, and MATLAB; proficiency in Python and shell scripting
- An understanding of ETL frameworks and tools
- Proficiency in statistical packages like Excel, SPSS, and SAS for analyzing data sets
- Knowledge of how to create and apply the most appropriate algorithms to datasets to find solutions
Would be nice if you also have
- Experience with data visualization tools such as Tableau, Business Objects, PowerBI or Qlik
- Adept at using data processing platforms like Hadoop and Apache Spark
- Experience handling unstructured data such as text, audio, and video, and extracting features from them
- Excellent analytical skills - the ability to identify trends, patterns and insights from data. You love numbers
- Strong attention to detail
- Great communication and presentation skills: the ability to write and speak clearly, and to communicate complex ideas in a way that is easy to understand
- Effective stakeholder management and great problem-solving skills
- A keen desire to take ownership and get things done; you follow through on commitments and live up to verbal and written agreements
- You are a quick learner of new technologies and easily adapt to change
- Ability to collaborate effectively and work as part of a team
- Enthusiasm: exhibit passion, excitement, and positive energy at work
Here are a couple of articles about us from our CEO, Jitendra:
Why we started MyYogaTeacher https://www.myyogateacher.com/articles/why-i-started-myyogateacher
Our mission and culture https://www.myyogateacher.com/articles/company-mission-culture
We look forward to hearing from you!
Designation – Deputy Manager - TS
- A total of 8-9 years of development experience in Data Engineering (B1/BII role)
- A minimum of 4-5 years in AWS data integrations, with very good data modelling skills
- Should be very proficient in end-to-end AWS data solution design, covering not only strong data ingestion and integration skills (both data at rest and data in motion) but also complete DevOps knowledge
- Should have delivered at least 4 Data Warehouse or Data Lake solutions on AWS
- Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc.
- Strong Python skills
- Should be an expert in cloud design principles, performance tuning, and cost modelling; AWS certifications are an added advantage
- Should be a team player with excellent communication skills, able to manage their work independently with minimal or no supervision
- A Life Sciences & Healthcare domain background is a plus
We are looking for a Data Engineer to join our data team to solve critical data-driven business problems. The hire will be responsible for expanding and optimizing the existing end-to-end architecture, including the data pipeline architecture. The Data Engineer will collaborate with software developers, database architects, data analysts, data scientists, and the platform team on data initiatives, and will ensure that the optimal data delivery architecture is consistent across ongoing projects. The right candidate should have hands-on experience developing a hybrid set of data pipelines based on business requirements.
- Develop, construct, test and maintain existing and new data-driven architectures.
- Align architecture with business requirements and provide the solutions that best fit the business problems.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure 'big data' technologies.
- Data acquisition from multiple sources across the organization.
- Use programming language and tools efficiently to collate the data.
- Identify ways to improve data reliability, efficiency and quality
- Use data to discover tasks that can be automated.
- Deliver updates to stakeholders based on analytics.
- Set up practices on data reporting and continuous monitoring
Required Technical Skills
- Graduate in Computer Science or in similar quantitative area
- 1+ years of relevant work experience as a Data Engineer or in a similar role.
- Advanced SQL knowledge and data modelling, experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
- Experience in developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures.
- Must have strong big data fundamentals and experience programming in Spark with Python/Scala.
- Experience with an orchestration tool like Airflow or similar.
- Experience with Azure Data Factory is good to have
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- A good understanding of Git workflows, test-case-driven development, and CI/CD is good to have.
- Good to have some understanding of Delta tables.
It would be an advantage if the candidate also has experience with the following software/tools:
- Experience with big data tools: Hadoop, Spark, Hive, etc.
- Experience with relational SQL and NoSQL databases
- Experience with cloud data services
- Experience with object-oriented/object function scripting languages: Python, Scala, etc.
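The ETL responsibilities above can be sketched, in miniature, as three plain-Python functions. This is only an illustrative shape under assumed field names (`user`, `amount`), not a substitute for Spark or Azure Data Factory.

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Read raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Cleanse: drop rows with missing amounts, cast types, normalize case."""
    out = []
    for r in rows:
        if not r["amount"]:
            continue  # filter out incomplete records
        out.append({"user": r["user"].strip().lower(), "amount": float(r["amount"])})
    return out

def load(rows: list[dict]) -> dict:
    """Toy 'load' step: aggregate into a per-user total, as a warehouse table might."""
    totals: dict[str, float] = {}
    for r in rows:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

raw = "user,amount\nAlice,10\nalice,5\nBob,\n"
print(load(transform(extract(raw))))  # {'alice': 15.0}
```

In a real pipeline each stage would be a separate Spark job or Data Factory activity, with an orchestrator such as Airflow sequencing them; the extract/transform/load split itself is the point of the sketch.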
Data Engineer - Senior
Cubera is a data company revolutionizing big data analytics and adtech through data-share-value principles, wherein users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards Web3.
What are you going to do?
Design and develop high-performance, scalable solutions that meet the needs of our customers.
Work closely with Product Management, Architects, and cross-functional teams.
Build and deploy large-scale systems in Java/Python.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.
Follow best practices that can be adopted in the big data stack.
Use your engineering experience and technical skills to drive the features and mentor the engineers.
What are we looking for (competencies)?
Bachelor’s degree in computer science, computer engineering, or related technical discipline.
Overall 5 to 8 years of programming experience in Java and Python, including object-oriented design.
Data handling frameworks: a working knowledge of one or more frameworks such as Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.
Data infrastructure: experience building, deploying, and maintaining applications on popular cloud infrastructure like AWS, GCP, etc.
Data stores: expertise in one of the general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc.
Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.
Ability to work with distributed teams in a collaborative and productive manner.
Competitive Salary Packages and benefits.
Collaborative, lively and an upbeat work environment with young professionals.
Job Category: Development
Job Type: Full Time
Job Location: Bangalore
● Able to contribute to gathering functional requirements, developing technical specifications, and project & test planning
● Demonstrating technical expertise and solving challenging programming and design problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units to drive results
● BS/MS in computer science or equivalent work experience
● 2-4 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with big data ecosystems: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, etc.
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala, etc.
● Strong leadership experience: leading meetings and presenting when required
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the Software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience with the cloud, preferably AWS
● A good understanding of, and the ability to develop, software, prototypes, or proofs of concept (POCs) for various data engineering requirements
Must Have Skills:
The role requires experience in AWS, as well as programming experience in Python and Spark.
Roles & Responsibilities
- Translate functional requirements into technical design
- Interact with clients and internal stakeholders to understand the data and platform requirements in detail and determine core cloud services needed to fulfil the technical design
- Design, develop, and deliver data integration interfaces in AWS
- Design, develop, and deliver data provisioning interfaces to fulfil consumption needs
- Deliver data models on a cloud platform; this could be AWS Redshift or SQL
- Design, develop, and deliver data integration interfaces at scale using Python/Spark
- Automate core activities to minimize the delivery lead times and improve the overall quality
- Optimize platform cost by selecting the right platform services and architecting the solution cost-effectively
- Manage code and deployments using DevOps and CI/CD processes
- Deploy logging and monitoring across the different integration points for critical alerts
- Minimum 5 years of software development experience
- Bachelor's and/or Master’s degree in computer science
- Strong Consulting skills in data management including data governance, data quality, security, data integration, processing and provisioning
- Delivered data management projects on AWS
- Translated complex analytical requirements into technical design including data models, ETLs and Dashboards / Reports
- Experience deploying dashboards and self-service analytics solutions on both relational and non-relational databases
- Experience with different computing paradigms in databases such as In-Memory, Distributed, Massively Parallel Processing
- Successfully delivered large scale data management initiatives covering Plan, Design, Build and Deploy phases leveraging different delivery methodologies including Agile
- Strong knowledge of continuous integration, static code analysis and test-driven development
- Experience in delivering projects in a highly collaborative delivery model with teams at onsite and offshore
- Excellent analytical and problem-solving skills are a must
- Delivered change management initiatives focused on driving data platforms adoption across the enterprise
- Strong verbal and written communications skills are a must, as well as the ability to work effectively across internal and external organizations