1+ years of proven experience in ML/AI with Python
Work with the manager through the entire analytical and machine learning model life cycle:
⮚ Define the problem statement
⮚ Build and clean datasets
⮚ Exploratory data analysis
⮚ Feature engineering
⮚ Apply ML algorithms and assess the performance
⮚ Codify for deployment
⮚ Test and troubleshoot the code
⮚ Communicate analysis to stakeholders
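The life-cycle steps above can be sketched end-to-end in a few lines. The following is a minimal, pure-Python illustration using a toy one-feature linear regression; the data points are invented for the example, not a real dataset:

```python
# Minimal sketch of the model life cycle above, on a toy
# one-feature linear regression (illustrative, invented data).

# 1. Problem statement: predict y from x.
raw = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1), (None, 5.0)]

# 2. Build and clean the dataset: drop rows with missing features.
data = [(x, y) for x, y in raw if x is not None]

# 3. Exploratory data analysis: basic summary statistics.
xs = [x for x, _ in data]
ys = [y for _, y in data]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)

# 4/5. Feature use + fit (ordinary least squares, closed form), then assess.
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / \
    sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
mse = sum((y - (slope * x + intercept)) ** 2 for x, y in data) / len(data)

# 6/7. "Codify for deployment" as a plain function and sanity-check it.
def predict(x):
    return slope * x + intercept

# 8. Communicate: report the fitted coefficients and the error metric.
print(round(slope, 2), mse)
```

In a real project each step would of course be backed by proper tooling (pandas, scikit-learn, a test suite), but the shape of the workflow is the same.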
Technical Skills
⮚ Proven experience using Python and SQL
⮚ Excellent programming and statistics skills
⮚ Working knowledge of tools and utilities: AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark
B1 – Data Scientist - Kofax Accredited Developers
Requirement – 3
Mandatory –
- Accreditation of Kofax KTA / KTM
- Experience in Kofax Total Agility Development – 2-3 years minimum
- Ability to develop and translate functional requirements to design
- Experience in requirement gathering, analysis, development, testing, documentation, version control, SDLC, Implementation and process orchestration
- Experience in Kofax Customization, writing Custom Workflow Agents, Custom Modules, Release Scripts
- Application development using Kofax and KTM modules
- Good/advanced understanding of Machine Learning/NLP/Statistics
- Exposure to or understanding of RPA/OCR/Cognitive Capture tools such as Appian, UiPath, Automation Anywhere, etc.
- Excellent communication skills and collaborative attitude
- Work with multiple teams and stakeholders within the organization, such as Analytics, RPA, Technology, and Project Management teams
- Good understanding of compliance, data governance and risk control processes
Total Experience – 7-10 years in BPO/KPO/ITES/BFSI/Retail/Travel/Utilities/Service Industry
Good to have
- Previous experience of working on Agile & Hybrid delivery environment
- Knowledge of VB.NET, C# (C-Sharp), SQL Server, web services
Qualification -
- Master's in Statistics/Mathematics/Economics/Econometrics, or BE/B.Tech, MCA, or MBA
Greetings!
We are looking for a data engineer for one of our premium clients, for their Chennai and Tirunelveli locations
Required Education/Experience
● Bachelor’s degree in Computer Science or related field
● 5-7 years’ experience in the following:
● Snowflake and Databricks management
● Python and AWS Lambda
● Scala and/or Java
● Data integration services, SQL, and Extract, Transform, Load (ETL)
● Azure or AWS for development and deployment
● Jira or similar tool during SDLC
● Experience managing a codebase using a code repository such as Git/GitHub or Bitbucket
● Experience working with a data warehouse.
● Familiarity with structured and semi-structured data formats including JSON, Avro, ORC, Parquet, or XML
● Exposure to working in an agile work environment
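Handling semi-structured formats such as JSON usually means flattening nested records into flat rows before loading them into a warehouse. A minimal stdlib sketch, with an invented example payload:

```python
import json

# Semi-structured input (e.g. an events feed); invented example payload.
payload = '''
[{"id": 1, "user": {"name": "asha", "city": "Chennai"}, "score": 9},
 {"id": 2, "user": {"name": "ravi"}, "score": 7}]
'''

def flatten(record):
    """Flatten one nested JSON record into a flat dict for warehouse loading."""
    user = record.get("user", {})
    return {
        "id": record["id"],
        "user_name": user.get("name"),
        "user_city": user.get("city"),  # missing keys become None, not errors
        "score": record["score"],
    }

rows = [flatten(r) for r in json.loads(payload)]
print(rows[1]["user_city"])  # → None
```

The same pattern (tolerate missing fields, emit a fixed schema) carries over to Avro, ORC, and Parquet, just with their respective reader libraries.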
Zycus is looking for applicants with a strong background in Analytics and Data mining (Web, Social and Big data), Machine Learning, Pattern Recognition, Natural Language Processing, Computational Linguistics, Statistical Modelling, Inferencing, Information Retrieval, Large Scale Distributed Systems, Cloud Computing, Econometrics, Quantitative Marketing, Applied Game Theory, Mechanism Design, Operations Research, Optimization, Human Computer Interaction and Information Visualization. Applicants with a background in other quantitative areas are also encouraged to apply.
We are looking for someone who can create and implement AI solutions. If you have built a product like IBM WATSON in the past and not just used WATSON to build applications, this could be the perfect role for you.
All successful candidates are expected to dive deep into problem areas of Zycus’ interest and invent technology solutions to not only advance the current products, but also to generate new product options that can strategically advantage the organization.
Roles & Responsibilities:
- Act as a technical thought leader in collaboration with the analytics leadership team, helping to set the strategy and standards for Machine Learning and advanced analytics
- Work with senior leaders from all functions to explore opportunities for using advanced analytics
- Provide technical leadership, coaching, and mentoring to talented data scientists and analytics professionals
- Guide data scientists in the use of advanced statistical, machine learning, and artificial intelligence methodologies
- Guide the work of other Machine learning team members to provide support and assistance, while also ensuring quality
- 3+ years of experience applying AI/ML/NLP/deep learning and data-driven statistical analysis & modelling solutions.
- Programming skills in Python; knowledge of statistics.
- Hands-on experience developing supervised and unsupervised machine learning algorithms (regression, decision trees/random forest, neural networks, feature selection/reduction, clustering, parameter tuning, etc.). Familiarity with reinforcement learning is highly desirable.
- Experience in the financial domain and familiarity with financial models are highly desirable.
- Experience in image processing and computer vision.
- Experience building data pipelines.
- Good understanding of Data preparation, Model planning, Model training, Model validation, Model deployment and performance tuning.
- Should have hands-on experience with some of these methods: Regression, Decision Trees, CART, Random Forest, Boosting, Evolutionary Programming, Neural Networks, Support Vector Machines, Ensemble Methods, Association Rules, Principal Component Analysis, Clustering, Artificial Intelligence
- Should have experience working with large data sets in a Postgres database.
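Several of the methods in the list above reduce to short loops. As one example, a minimal 1-D k-means clustering sketch, with invented data and starting centers:

```python
# Minimal 1-D k-means sketch (clustering, from the methods listed above).
# Data points and initial centers are invented for illustration.
points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.9]
centers = [0.0, 10.0]  # deliberately poor initial centers

for _ in range(10):  # fixed iteration budget for simplicity
    # Assignment step: attach each point to its nearest center.
    clusters = [[], []]
    for p in points:
        nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
        clusters[nearest].append(p)
    # Update step: move each center to the mean of its cluster.
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]

print([round(c, 2) for c in centers])  # → [1.0, 8.03]
```

Production clustering would use scikit-learn's `KMeans` (with proper initialization and convergence checks), but the assignment/update alternation shown here is the whole algorithm.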
Roles and Responsibilities
- Managing available resources such as hardware, data, and personnel so that deadlines are met.
- Analyzing the ML and Deep Learning algorithms that could be used to solve a given problem and ranking them by their success probabilities
- Exploring data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
- Defining a validation framework and establishing a process to ensure acceptable data quality criteria are met
- Supervising the data acquisition and partnership roadmaps to create a stronger product for our customers.
- Defining feature engineering process to ensure usage of meaningful features given the business constraints which may vary by market
- Devise self-learning strategies through analysis of errors from the models
- Understand business issues and context, devise a framework for solving unstructured problems and articulate clear and actionable solutions underpinned by analytics.
- Manage multiple projects simultaneously while demonstrating business leadership to collaborate & coordinate with different functions to deliver the solutions in a timely, efficient and effective manner.
- Manage project resources optimally to deliver projects on time; drive innovation using residual resources to create a strong solution pipeline; provide direction, coaching, training, and feedback to project team members to enhance performance, support development, and encourage value-aligned behaviour; provide inputs for periodic performance appraisals of project team members.
Preferred Technical & Professional expertise
- Undergraduate Degree in Computer Science / Engineering / Mathematics / Statistics / Economics or other quantitative fields
- 2+ years of experience managing Data Science projects with specializations in Machine Learning
- In-depth knowledge of cloud analytics tools.
- Able to drive Python code optimization; ability to review code and provide inputs to improve its quality
- Ability to evaluate hardware selection for running ML models for optimal performance
- Up to date with Python libraries and versions for machine learning; Extensive hands-on experience with Regressors; Experience working with data pipelines.
- Deep knowledge of math, probability, statistics and algorithms; Working knowledge of Supervised Learning, Adversarial Learning and Unsupervised learning
- Deep analytical thinking with excellent problem-solving abilities
- Strong verbal and written communication skills with a proven ability to work with all levels of management; effective interpersonal and influencing skills.
- Ability to manage a project team through effective allocation of tasks, anticipating risks, and setting realistic timelines to manage the expectations of key stakeholders
- Strong organizational skills and an ability to balance and handle multiple concurrent tasks and issues
- Ensure that the project team understands and abides by the compliance framework for policies, data, systems, etc. as per group, region, and local standards
What is the role?
You will be responsible for developing and designing front-end web architecture, ensuring the responsiveness of applications, and working alongside graphic designers on web design features, among other duties. You will be responsible for the functional/technical track of the project.
Key Responsibilities
- Develop and automate large-scale, high-performance data processing systems (batch and/or streaming).
- Build high-quality software engineering practices towards building data infrastructure and pipelines at scale.
- Lead data engineering projects to ensure pipelines are reliable, efficient, testable, & maintainable
- Optimize performance to meet high throughput and scale
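The batch-processing responsibility above boils down to a simple pattern: read records lazily, transform them, and emit fixed-size batches. A small sketch with invented record contents:

```python
# Micro-batch sketch: lazily read records and group them into batches.
# Record contents are invented for the example.

def source():
    """Lazily yield raw records (stand-in for a stream or large file)."""
    for i in range(7):
        yield {"id": i, "value": i * 10}

def batches(records, size):
    """Group an iterator of records into lists of at most `size`."""
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

out = [len(b) for b in batches(source(), 3)]
print(out)  # → [3, 3, 1]
```

Because both stages are generators, nothing is held in memory beyond one batch, which is the property that lets the same shape scale to frameworks like Spark or Kafka consumers.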
What are we looking for?
- 4+ years of relevant industry experience.
- Working with data at the terabyte scale.
- Experience designing, building and operating robust distributed systems.
- Experience designing and deploying high throughput and low latency systems with reliable monitoring and logging practices.
- Building and leading teams.
- Working knowledge of relational databases like PostgreSQL/MySQL.
- Experience with Python / Spark / Kafka / Celery
- Experience working with OLTP and OLAP systems
- Excellent communication skills, both written and verbal.
- Experience working in the cloud, e.g., AWS, Azure, or GCP
Whom will you work with?
You will work with a top-notch tech team, working closely with the architect and engineering head.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts while maintaining the quality of your work, interact and share your ideas, and learn a great deal while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at this company.
We are
We strive to make selling fun with our SaaS incentive gamification product. Company is the #1 gamification software that automates and digitizes Sales Contests and Commission Programs. With game-like elements, rewards, recognitions, and complete access to relevant information, Company turbocharges an entire salesforce. Company also empowers Sales Managers with easy-to-publish game templates, leaderboards, and analytics to help accelerate performances and sustain growth.
We are a fun and high-energy team, with people from diverse backgrounds - united under the passion of getting things done. Rest assured that you shall get complete autonomy in your tasks and ample opportunities to develop your strengths.
Way forward
If you find this role exciting and want to join us in Bangalore, India, then apply by clicking below. Provide your details and upload your resume. All received resumes will be screened; shortlisted candidates will be invited for a discussion, and on mutual alignment and agreement we will proceed with hiring.
About Us
Zupee is India’s fastest-growing innovator in real money gaming with a focus on predominantly skill-focused games. Started by 2 IIT-Kanpur alumni in 2018, we are backed by marquee global investors such as WestCap Group, Tomales Bay Capital, Matrix Partners, Falcon Edge, Orios Ventures, and Smile Group.
Know more about our recent funding: https://bit.ly/3AHmSL3
Our focus has been on innovating in the board, strategy, and casual games sub-genres. We innovate to ensure our games provide an intersection between skill and entertainment, enabling our users to earn while they play.
Location: We are location agnostic and our teams work from anywhere. Physically, we are based out of Gurugram.
Core Responsibilities:
- Responsible for designing, developing, testing, and deploying ML models that can leverage multiple signal sources, to build a personalized user experience
- Work closely with the business stakeholders to identify the potential Data science applications
- Contribute by doing opportunity analysis, building project proposals, and designing and implementing ML projects in the areas of Ranking, Embeddings, Recommendation engines, etc.
- Collaborate with software engineering teams to design experiments, model implementations, and new feature creation
- Clearly communicate technical details, strategies, and outcomes to a business audience
What are we looking for
- 4+ years of hands-on experience in data science using techniques including, but not limited to, regression, classification, and NLP
- Previous experience in model deployment, model monitoring, optimization and model interpretability.
- Expertise with Random forests, Gradient boosting, KNN, Regression and unsupervised learning algorithms
- Experience using neural networks (ANN, RNN), reinforcement learning, or deep learning
- Solid understanding of Python and common machine learning frameworks such as XGBoost, scikit-learn, PyTorch/Tensorflow
- Outstanding problem-solving skills, with demonstrated ability to think creatively and strategically
- Technology-driven mindset, up to date with digital and technology literature and trends.
- Must have knowledge of Experimentation and Basic Statistics
- Must have experience in Predictive Analytics
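One of the techniques listed above, k-nearest neighbours, fits in a few lines of stdlib Python. A sketch with an invented 2-D dataset:

```python
from collections import Counter
import math

# Tiny labelled dataset (invented points): two clusters in 2-D.
train = [((1.0, 1.0), "a"), ((1.2, 0.9), "a"), ((0.8, 1.1), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.8), "b"), ((4.9, 5.1), "b")]

def knn_predict(query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    nearest = sorted(train, key=lambda item: math.dist(query, item[0]))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

print(knn_predict((1.1, 1.0)))  # → a
print(knn_predict((4.5, 5.0)))  # → b
```

Real deployments would use scikit-learn's `KNeighborsClassifier` with a spatial index, but the distance-sort-vote logic is identical.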
Non-Desired Skills:
- Experience in Descriptive Analytics – Don’t apply
- Experience in NLP, text analysis, image, or speech analysis – Don’t apply
- Experience with relational SQL & NoSQL databases including MySQL & MongoDB.
- Familiar with the basic principles of distributed computing and data modeling.
- Experience with distributed data pipeline frameworks like Celery, Apache Airflow, etc.
- Experience with NLP and NER models is a bonus.
- Experience building reusable code and libraries for future use.
- Experience building REST APIs.
Preference for candidates working in tech product companies
We are looking for applicants with a strong background in Analytics and Data mining (Web, Social and Big data), Machine Learning and Pattern Recognition, Natural Language Processing and Computational Linguistics, Statistical Modelling and Inferencing, Information Retrieval, Large Scale Distributed Systems and Cloud Computing, Econometrics and Quantitative Marketing, Applied Game Theory and Mechanism Design, Operations Research and Optimization, Human Computer Interaction and Information Visualization. Applicants with a background in other quantitative areas are also encouraged to apply.
Skills:
- Experience in predictive modelling and predictive software development
- Skilled in Java, C++, Perl/Python (or similar scripting language)
- Experience in using R, Matlab, or any other statistical software
- Experience in mentoring junior team members, and guiding them on machine learning and data modelling applications
- Strong communication and data presentation skills
- Classification (SVM, decision tree, random forest, neural network)
- Regression (linear, polynomial, logistic, etc.)
- Classical optimization (gradient descent, Newton-Raphson, etc.)
- Graph theory (network analytics)
- Heuristic optimisation (genetic algorithms, swarm theory)
- Deep learning (LSTM, convolutional NN, recurrent NN)
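Two items from the list above, logistic regression and gradient descent, can be combined into one short sketch. The data points are invented for illustration:

```python
import math

# Logistic regression fit by plain gradient descent, combining two
# methods from the list above. One feature; invented data points.
data = [(0.0, 0), (1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1), (5.0, 1)]
w, b, lr = 0.0, 0.0, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(2000):  # fixed iteration budget for the sketch
    # Gradient of the average log-loss with respect to w and b.
    gw = sum((sigmoid(w * x + b) - y) * x for x, y in data) / len(data)
    gb = sum((sigmoid(w * x + b) - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

# The learned decision boundary should separate x<=2 from x>=3.
preds = [1 if sigmoid(w * x + b) > 0.5 else 0 for x, _ in data]
print(preds)  # → [0, 0, 0, 1, 1, 1]
```

Swapping the gradient update for Newton-Raphson (also listed above) would use the second derivative for faster convergence; the loss and model stay the same.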
Must Have:
- Experience: 3-9 years
- The ideal candidate must have proven expertise in Artificial Intelligence (including deep learning algorithms), Machine Learning and/or NLP
- The candidate must also have expertise in programming traditional machine learning algorithms, algorithm design & usage
- Preferred experience with large data sets & distributed computing in Hadoop ecosystem
- Fluency with databases
Your mission is to help lead the team towards creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing, and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex, mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies inspires your team to follow suit.
Responsibilities and Duties :
- As a Data Engineer you will be responsible for the development of data pipelines for numerous applications, handling all kinds of data: structured, semi-structured, and unstructured. Big-data knowledge, especially in Spark and Hive, is highly preferred.
- Work in a team and provide proactive technical oversight; advise development teams, fostering re-use, design for scale, stability, and operational efficiency of data/analytical solutions
Education level :
- Bachelor's degree in Computer Science or equivalent
Experience :
- Minimum 5 years of relevant experience working on production-grade projects, with hands-on, end-to-end software development
- Expertise in application, data and infrastructure architecture disciplines
- Expertise in designing data integrations using ETL and other data integration patterns
- Advanced knowledge of architecture, design and business processes
Proficiency in :
- Modern programming languages like Java, Python, Scala
- Big Data technologies: Hadoop, Spark, Hive, Kafka
- Writing decently optimized SQL queries
- Orchestration and deployment tools like Airflow & Jenkins for CI/CD (Optional)
- Responsible for design and development of integration solutions with Hadoop/HDFS, Real-Time Systems, Data Warehouses, and Analytics solutions
- Knowledge of system development lifecycle methodologies, such as waterfall and Agile.
- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices.
- Experience generating physical data models and the associated DDL from logical data models.
- Experience developing data models for operational, transactional, and operational reporting, including the development of or interfacing with data analysis, data mapping, and data rationalization artifacts.
- Experience enforcing data modeling standards and procedures.
- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development and Big Data solutions.
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
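The "physical data models and the associated DDL" bullet above can be illustrated with a small sketch: a logical model held as a dict, turned into CREATE TABLE statements, and applied to an in-memory SQLite database. The entity and column names are invented for the example:

```python
import sqlite3

# Sketch: derive physical DDL from a small logical model and apply it.
# Entity and column names are invented for the example.
logical_model = {
    "customer": [("customer_id", "INTEGER PRIMARY KEY"),
                 ("name", "TEXT NOT NULL")],
    "orders": [("order_id", "INTEGER PRIMARY KEY"),
               ("customer_id", "INTEGER REFERENCES customer(customer_id)"),
               ("amount", "REAL")],
}

def to_ddl(model):
    """Generate one CREATE TABLE statement per entity in the logical model."""
    stmts = []
    for table, columns in model.items():
        cols = ", ".join(f"{name} {ctype}" for name, ctype in columns)
        stmts.append(f"CREATE TABLE {table} ({cols});")
    return stmts

conn = sqlite3.connect(":memory:")
for stmt in to_ddl(logical_model):
    conn.execute(stmt)

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # → ['customer', 'orders']
```

A real modeling workflow would go through a tool (erwin, dbt, SQLAlchemy metadata) and target the warehouse's dialect, but the logical-to-physical translation step has this shape.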
Skills :
Must Know :
- Core big-data concepts
- Spark - PySpark/Scala
- A data integration tool such as Pentaho, NiFi, or SSIS (at least one)
- Handling of various file formats
- Cloud platform - AWS/Azure/GCP
- Orchestration tool - Airflow