50+ Machine Learning (ML) Jobs in Bangalore (Bengaluru)
Apply to 50+ Machine Learning (ML) Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Machine Learning (ML) Job opportunities across top companies like Google, Amazon & Adobe.
Mammoth
Mammoth is a data management platform revolutionizing the way people work with data. Our lightweight, self-serve SaaS analytics solution takes care of data ingestion, storage, cleansing, transformation, and exploration, empowering users to derive insights with minimal friction. Based in London, with offices in Portugal and Bangalore, we offer flexibility in work locations, whether you prefer remote or in-person office interactions.
Job Responsibilities
- Collaboratively build robust, scalable, and maintainable software across the stack.
- Design, develop, and maintain APIs and web services.
- Dive into complex challenges around performance, scalability, concurrency, and deliver quality results.
- Improve code quality through writing unit tests, automation, and conducting code reviews.
- Work closely with the design team to understand user requirements, formulate use cases, and translate them into effective technical solutions.
- Contribute ideas and participate in brainstorming, design, and architecture discussions.
- Engage with modern tools and frameworks to enhance development processes.
Skills Required
Frontend:
- Proficiency in Vue.js and a solid understanding of JavaScript.
- Strong grasp of HTML/CSS, with attention to detail.
- Knowledge of TypeScript is a plus.
Backend:
- Strong proficiency in Python and mastery of at least one framework (Django, Flask, Pyramid, FastAPI or litestar).
- Experience with database systems such as PostgreSQL.
- Familiarity with performance trade-offs and best practices for backend systems.
General:
- Solid understanding of fundamental computer science concepts like algorithms, data structures, databases, operating systems, and programming languages.
- Experience in designing and building RESTful APIs and web services (a brief illustrative sketch follows the skills lists).
- Ability to collaborate effectively with cross-functional teams.
- Passion for solving challenging technical problems with innovative solutions.
- AI/ML experience, or willingness to learn as needed.
Nice to have:
- Familiarity with DevOps tools like Docker, Kubernetes, and Ansible.
- Understanding of frontend build processes using tools like Vite.
- Demonstrated experience with end-to-end development projects or personal projects.
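For context on the REST API expectation above, here is a minimal, illustrative FastAPI sketch (FastAPI is one of the frameworks the backend list mentions). The resource name, fields, and in-memory store are assumptions for illustration only, not part of Mammoth's actual codebase:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# In-memory store standing in for a real database such as PostgreSQL; purely illustrative.
DATASETS: dict[int, dict] = {}

class Dataset(BaseModel):
    name: str
    row_count: int = 0

@app.post("/datasets/{dataset_id}")
def create_dataset(dataset_id: int, dataset: Dataset) -> dict:
    """Register a dataset and return the stored record."""
    DATASETS[dataset_id] = {"name": dataset.name, "row_count": dataset.row_count}
    return DATASETS[dataset_id]

@app.get("/datasets/{dataset_id}")
def read_dataset(dataset_id: int) -> dict:
    """Fetch a dataset, or return 404 if it is unknown."""
    if dataset_id not in DATASETS:
        raise HTTPException(status_code=404, detail="Dataset not found")
    return DATASETS[dataset_id]
```

Run locally with `uvicorn main:app --reload` (assuming the file is saved as main.py); FastAPI then serves interactive API docs at /docs.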
Job Perks
- Free lunches and stocked kitchen with fruits and juices.
- Game breaks to unwind.
- Work in a spacious and serene office located in Koramangala, Bengaluru.
- Opportunity to contribute to a groundbreaking platform with a passionate and talented team.
If you’re an enthusiastic developer who enjoys working across the stack, thrives on solving complex problems, and is eager to contribute to a fast-growing, mission-driven company, apply now!
Client based in Bangalore.
Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems) in real-world applications; a brief illustrative sketch follows this list.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
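As an illustration of the "at least two algorithms" requirement above, the following hedged sketch fits one classification model and one clustering model with scikit-learn on synthetic data; dataset shapes, model choices, and hyperparameters are placeholders rather than a prescribed approach:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs, make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, silhouette_score
from sklearn.model_selection import train_test_split

# Classification on a synthetic dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("classification accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Clustering on a synthetic dataset.
X_blobs, _ = make_blobs(n_samples=500, centers=4, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_blobs)
print("silhouette score:", silhouette_score(X_blobs, labels))
```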
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Knowledge of the Japanese language is an additional advantage, but not mandatory.
Overview:-
DocLens.ai is a US-based Insurtech startup founded by industry veterans (ex-CapGemini, IBM, CNA, Cisco Meraki, EXL) from the fields of Insurance, Technology, and AI. We are committed to transforming the insurance sector with cutting-edge AI-driven Risk Assistant solutions.
We are seeking an experienced Cloud Integration Engineer with a strong background in Azure AI. The role focuses on supporting post-sales client integrations by migrating solutions from Azure to AWS while ensuring seamless implementation of our AI-based products.
This is a hands-on role involving technical work, client engagement, and cross-functional collaboration to ensure successful migration and product integration.
Key Responsibilities:-
- Serve as the main point of contact for assigned Insurtech AI customers.
- Develop and maintain strong relationships with clients to ensure they achieve their desired outcomes with the AI project.
- Coordinate the implementation of Insurtech AI solutions on Azure, ensuring seamless delivery and execution.
- Work with internal and external teams (engineering, data science, cloud specialists) to customize solutions that meet the insurance sector's requirements. This will require spending time at the customer location (Bangalore itself).
- Track project milestones, ensuring timely delivery and resolution of any technical or business issues.
- Conduct regular check-ins with clients to review the performance of the AI solution, ensuring optimal usage of Azure’s cloud and AI services.
- Analyze customer feedback and usage patterns to recommend enhancements or additional Azure services that can improve their Insurtech AI outcomes.
- Develop training materials, user guides, and best practices to help clients maximize their use of AI features on Azure.
- Use Azure’s analytics and reporting tools to provide clients with insights into AI model performance, helping them make data-driven decisions.
- Monitor KPIs and other success metrics, ensuring that clients meet their key objectives with AI adoption.
- Act as the customer’s voice internally, ensuring product, engineering, and support teams are aligned with client goals.
- Participate in troubleshooting sessions to resolve any technical issues in collaboration with Azure’s support and cloud engineering teams.
Qualifications:-
Experience:
- 6+ years of experience in cloud solutions in a SaaS, Cloud, or AI-focused environment.
- Hands-on experience with Azure AI services (Azure AI, Machine Learning, Cognitive Services).
- Familiarity with AWS cloud services and data workflows for seamless migration and integration.
- Experience working with Insurtech or insurance clients is a strong advantage.
Technical Skills:
- Expertise in Azure services and tools, with working knowledge of AWS services such as S3, Lambda, SageMaker, and Glue.
- Basic understanding of AI/ML models and their practical applications in business.
- Proficiency in managing data transformations, pipelines, and analytics workflows.
Soft Skills:
- Strong communication and interpersonal skills for managing client relationships.
- Proven ability to solve complex problems and drive technical discussions to resolution.
- Ability to work collaboratively in cross-functional teams.
Education:
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
What We Offer:-
- Competitive salary and performance-based bonuses.
- Opportunity to work on impactful projects at the intersection of AI and Insurance.
- A collaborative, innovative work environment with an early-stage startup.
- Hands-on experience with cutting-edge technologies across Azure and AWS ecosystems.
- AI/ML Data Solution Architecture: Define a data architecture strategy that aligns with business goals for AI use cases and ensures data availability and accuracy.
- Data Quality & Governance: Ensure data quality by implementing and enforcing data governance policies and best practices.
- Technology Evaluation & Selection: Execute proofs of concept and proofs of value for various technology solutions and frameworks.
- Continuous Learning: Stay up to date with emerging technologies and trends in Data Engineering, AI/ML, and GenAI, and recommend their adoption wherever appropriate.
- Team Management: Lead a team of data architects and data engineers, and coordinate with vendors and technology partners.
- Collaboration & Communication: Work closely with executives, stakeholders, and business teams to communicate the architecture strategy and clearly articulate its business value.
Experience: 12 to 16 years
Work location: JP Nagar 3rd Phase, South Bangalore (work-from-office role; IC role to begin with)
The suitable candidate should be able to demonstrate strong experience in the following areas:
Data Engineering
- Hands-on experience with data engineering tools such as Talend (or Informatica or AbInitio), Databricks (or Spark), and HVR (or Attunity or Golden Gate or Equalum).
- Working knowledge of data build tools, Azure Data Factory, continuous integration and continuous delivery (CI/CD), automated testing, data lakes, data warehouses, big data, Collibra, and Unity Catalog
- Basic knowledge of building analytics applications using Azure Purview, Power BI, Spotfire, and Azure Machine Learning.
AI & ML -
- Conceptualize & design end-to-end solution view of sourcing data, pre-processing data, feature stores, model development, evaluation, deployment & governance
- Define model development best practices, create POVs on emerging AI trends, drive proposal responses, help solve complex analytics problems, and strategize end-to-end implementation frameworks and methodologies
- Thorough understanding of database, streaming & analytics services offered by popular cloud platforms (Azure, AWS, GCP) and hands-on implementation of building machine learning pipeline with at least one of the popular cloud platforms
- Expertise in Large Language Models preferred, with exposure to implementing generative AI using ChatGPT / OpenAI and other models. Harvesting models from open source will be an added advantage
- Good understanding of statistical analysis, data analysis and knowledge of data management & visualization techniques
- Exposure to other AI Platforms & products (Kore.ai, expert.ai, Dataiku etc.) desired
- Hands-on development experience in Python/R is a must, and additional hands-on experience in a few other procedural/object-oriented programming languages (Java, C#, C++) is desirable.
- Leadership skills to drive AI/ML-related conversations with CXOs and senior leadership, and to make impactful presentations to customer organizations
Stakeholder Management & Communication Skills
- Excellent communication, negotiation, influencing and stakeholder management skills
- Preferred to have experience in project management, particularly in executing projects using Agile delivery frameworks
- Customer focus and excellent problem-solving skills
Qualification
- BE or MTech (BSc or MSc) in engineering, sciences, or equivalent relevant experience required.
- A total of 13+ years of experience, with 10+ years in building/managing/administering data and analytics applications, is required
- Experience designing solution architectures and presenting them in architecture review forums
Additional Qualifications
- Ability to define best practices for data governance, data quality, and data lineage, and to operationalize those practices.
- Proven track record of designing and delivering solutions that comply with industry regulations and legislation such as GxP, SoX, HIPAA, and GDPR.
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, model experimentation, and model automation
- Develop scalable ML pipelines
- Develop MLOps components in the machine learning development life cycle using a model repository (either of): MLflow, Kubeflow Model Registry (a brief MLflow tracking sketch follows this list)
- Develop MLOps components in the machine learning development life cycle using machine learning services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku, or any relevant ML E2E PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
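As a minimal sketch of the model tracking and model registry responsibilities listed above, the snippet below logs parameters, a metric, and a model with MLflow. The experiment and registered model names are hypothetical, and registering a model assumes a registry-capable MLflow tracking backend (e.g., a database-backed tracking server):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demand-forecast")  # hypothetical experiment name

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    alpha = 0.5
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))

    # Track parameters and metrics for this run.
    mlflow.log_param("alpha", alpha)
    mlflow.log_metric("mse", mse)

    # Log the model artifact and register it in the MLflow Model Registry
    # (requires a registry-capable tracking backend).
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demand-forecast-ridge",  # hypothetical name
    )
```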
Required Qualifications
- 3-5 years experience building production-quality software.
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR Equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Knowledge of MLOps, machine learning, and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- CI/CD experience (e.g., Jenkins, GitHub Actions)
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, model experimentation, and model automation
- Develop scalable ML pipelines
- Develop MLOps components in the machine learning development life cycle using a model repository (either of): MLflow, Kubeflow Model Registry
- Develop MLOps components in the machine learning development life cycle using machine learning services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku, or any relevant ML E2E PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 5.5-9 years experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR equivalent
- Strong experience in System Integration, Application Development, or Data Warehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning, and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production-ready ML pipelines.
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem solving, project management and communication skills & creative thinking
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
A leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage.
What are we looking for?
- Bachelor’s degree in analytics related area (Data Science, Computer Engineering, Computer Science, Information Systems, Engineering, or a related discipline)
- 7+ years of work experience in data science, analytics, engineering, or product management for a diverse range of projects
- Hands on experience with python, deploying ML models
- Hands on experience with time-wise tracking of model performance, and diagnosis of data/model drift
- Familiarity with Dataiku or other data-science-enabling tools (SageMaker, etc.)
- Demonstrated familiarity with distributed computing frameworks (Snowpark, PySpark)
- Experience working with various types of data (structured / unstructured)
- Deep understanding of all data science phases (e.g., data engineering, EDA, machine learning, MLOps, serving)
- Highly self-motivated to deliver both independently and with strong team collaboration
- Ability to creatively take on new challenges and work outside comfort zone
- Strong English communication skills (written & verbal)
Roles & Responsibilities:
- Clearly articulates expectations, capabilities, and action plans; actively listens with others’ frame of reference in mind; appropriately shares information with team; favorably influences people without direct authority
- Clearly articulates scope and deliverables of projects; breaks complex initiatives into detailed component parts and sequences actions appropriately; develops action plans and monitors progress independently; designs success criteria and uses them to track outcomes; drives implementation of recommendations when appropriate, engages with stakeholders throughout to ensure buy-in
- Manages projects with and through others; shares responsibility and credit; develops self and others through teamwork; comfortable providing guidance and sharing expertise with others to help them develop their skills and perform at their best; helps others take appropriate risks; communicates frequently with team members earning respect and trust of the team
- Experience in translating business priorities and vision into product/platform thinking, set clear directives to a group of team members with diverse skillsets, while providing functional & technical guidance and SME support
- Demonstrated experience interfacing with internal and external teams to develop innovative data science solutions
- Strong business analysis, product design, and product management skills
- Ability to work in a collaborative environment: reviewing peers' code, contributing to problem-solving sessions, and communicating technical knowledge to a variety of audiences (such as management, brand teams, data engineering teams, etc.)
- Ability to articulate model performance to a non-technical crowd, and ability to select appropriate evaluation criteria to evaluate hidden confounders and biases within a model
- Experience with MLOps frameworks and their use in model tracking and deployment, and in automating the model serving pipeline
- Work with ML models of all sizes, from linear/logistic regression to other sklearn-style models to deep learning
- Formulate training schemes for unbiased model training (e.g., k-fold cross-validation, leave-one-out cross-validation) for parameter searching and model tuning (see the sketch after this list)
- Ability to work on machine learning problems such as recommender systems and the end-to-end ML lifecycle
- Ability to manage ML on largely imbalanced training sets (<5% positive rate)
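A minimal sketch of the cross-validation and class-imbalance points above, using scikit-learn. The dataset is synthetic, and the choice of logistic regression with balanced class weights and average-precision scoring is illustrative rather than prescriptive:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic, heavily imbalanced dataset (~4% positive rate).
X, y = make_classification(
    n_samples=5000, n_features=20, weights=[0.96], random_state=0
)
print("positive rate:", y.mean())

# Stratified folds keep the positive rate stable across splits, and
# class_weight="balanced" counteracts the imbalance during training.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Average precision (area under the PR curve) is usually more informative
# than accuracy on imbalanced data.
scores = cross_val_score(clf, X, y, cv=cv, scoring="average_precision")
print("mean average precision:", np.mean(scores))
```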
Client based in Bangalore.
Data Scientist - Healthcare AI
Location: Bangalore, India
Experience: 4+ years
Skills Required: Radiology, visual images, text, classical models, LLM multi-modal
Responsibilities:
· LLM Development and Fine-tuning: Fine-tune and adapt large language models (e.g., GPT, Llama 2, Mistral) for specific healthcare applications, such as text classification, named entity recognition, and question answering (a brief illustrative fine-tuning sketch follows this list).
· Data Engineering: Collaborate with data engineers to build robust data pipelines for large-scale text datasets used in LLM training and fine-tuning.
· Model Evaluation and Optimization: Develop rigorous experimentation frameworks to assess model performance, identify areas for improvement, and inform model selection.
· Production Deployment: Work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
· Predictive Model Design: Leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models).
· Cross-functional Collaboration: Partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions.
· Knowledge Sharing: Mentor junior team members and stay up-to-date with the latest advancements in machine learning and LLMs.
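A minimal, illustrative sketch of fine-tuning a transformer for text classification with the Hugging Face Trainer API. The base model and the public IMDB dataset are placeholders; real clinical text would require de-identification, domain-specific labels, and compliance review:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Small public dataset as a stand-in for labeled clinical text.
dataset = load_dataset("imdb", split="train[:2000]").train_test_split(test_size=0.2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="clf-out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()
print(trainer.evaluate())
```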
Qualifications:
· Doctoral or master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
· 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models.
· 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).
· Experience working with cloud-based platforms (AWS, GCP, Azure).
Preferred Qualifications:
o Experience working in the healthcare domain, particularly oncology.
o Publications in relevant scientific journals or conferences.
o Degree from a prestigious university or research institution.
at Zolvit (formerly Vakilsearch)
Role Overview:
We are looking for a skilled Data Scientist with expertise in data analytics, machine learning, and AI to join our team. The ideal candidate will have a strong command of data tools, programming, and knowledge of LLMs and Generative AI, contributing to the growth and automation of our business processes.
Key Responsibilities:
1. Data Analysis & Visualization:
- Develop and manage data pipelines, ensuring data accuracy and integrity.
- Design and implement insightful dashboards using Power BI to help stakeholders make data-driven decisions.
- Extract and analyze complex data sets using SQL to generate actionable insights.
2. Machine Learning & AI Models:
- Build and deploy machine learning models to optimize key business functions like discount management, lead qualification, and process automation.
- Apply Natural Language Processing (NLP) techniques for text extraction, analysis, and classification from customer documents.
- Implement and fine-tune Generative AI models and large language models (LLMs) for various business applications, including prompt engineering for automation tasks.
3. Automation & Innovation:
- Use AI to streamline document verification, data extraction, and customer interaction processes.
- Innovate and automate manual processes, creating AI-driven solutions for internal teams and customer-facing systems.
- Stay abreast of the latest advancements in machine learning, NLP, and generative AI, applying them to real-world business challenges.
Qualifications:
- Bachelor's or Master’s degree in Computer Science, Data Science, Statistics, or related field.
- 4-7 years of experience as a Data Scientist, with proficiency in Python, SQL, Power BI, and Excel.
- Expertise in building machine learning models and utilizing NLP techniques for text processing and automation.
- Experience in working with large language models (LLMs) and generative AI to create efficient and scalable solutions.
- Strong problem-solving skills, with the ability to work independently and in teams.
- Excellent communication skills, with the ability to present complex data in a simple, actionable way to non-technical stakeholders.
If you’re excited about leveraging data and AI to solve real-world problems, we’d love to have you on our team!
Client based in Bangalore.
Data Science:
• Expert-level Python, analytical skills, experience working with different types of models, strong grasp of basic concepts, CPG domain knowledge.
• Statistical models and hypothesis testing.
• Machine learning (important).
• Business understanding and visualization in Python.
• Classification, clustering, and regression.
Mandatory Skills
• Data Science, Python, Machine Learning, Statistical Models, Classification, clustering and regression
Client based in Bangalore.
Data Scientist with LLM and Healthcare Expertise
Keywords: Data Scientist, LLM, Radiology, Healthcare, Machine Learning, Deep Learning, AI, Python, TensorFlow, PyTorch, Scikit-learn, Data Analysis, Medical Imaging, Clinical Data, HIPAA, FDA.
Responsibilities:
· Develop and deploy advanced machine learning models, particularly focusing on Large Language Models (LLMs) and their application in the healthcare domain.
· Leverage your expertise in radiology, visual images, and text data to extract meaningful insights and drive data-driven decision-making.
· Collaborate with cross-functional teams to identify and address complex healthcare challenges.
· Conduct research and explore new techniques to enhance the performance and efficiency of our AI models.
· Stay up-to-date with the latest advancements in machine learning and healthcare technology.
Qualifications:
· Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
· 6+ years of hands-on experience in data science and machine learning.
· Strong proficiency in Python and popular data science libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
· Deep understanding of LLM architectures, training methodologies, and applications.
· Expertise in working with radiology images, visual data, and text data.
· Experience in the healthcare domain, particularly in areas such as medical imaging, clinical data analysis, or patient outcomes.
· Excellent problem-solving, analytical, and communication skills.
Preferred Qualifications:
· PhD in Computer Science, Data Science, or a related field.
· Experience with cloud platforms (e.g., AWS, GCP, Azure).
· Knowledge of healthcare standards and regulations (e.g., HIPAA, FDA).
· Publications in relevant academic journals or conferences.
Company: Optimum Solutions
About the company: Optimum Solutions is a leader in the sheet metal industry, providing sheet metal solutions to sheet metal fabricators, with a proven track record of reliable product delivery. From tools to software and machines, we are a one-stop shop for all your technology needs.
Role Overview:
- Creating and managing database schemas that represent and support business processes; hands-on experience with SQL queries and managing database server deployments.
- Implementing automated testing platforms, unit tests, and CI/CD pipelines
- Proficient understanding of code versioning tools, such as GitHub, Bitbucket, ADO
- Understanding of container platform, such as Docker
Job Description
- We are looking for a strong Python Developer with knowledge of machine learning and deep learning frameworks.
- Your primary focus will be working with the Product and Use Case delivery team on prompting for different Gen-AI use cases.
- You will be responsible for prompting and building use case pipelines.
- Evaluate all Gen-AI features and use case pipelines.
Position: AI ML Engineer
Location: Chennai (Preference) and Bangalore
Minimum Qualification: Bachelor's degree in computer science, Software Engineering, Data Science, or a related field.
Experience: 4-6 years
CTC: 16.5 - 17 LPA
Employment Type: Full Time
Key Responsibilities:
- Own the entire prompt life cycle, including prompt design, prompt template creation, and prompt tuning/optimization for various Gen-AI base models
- Design and develop prompts suiting project needs
- Lead and manage team of prompt engineers
- Stakeholder management across business and domains as required for the projects
- Evaluating base models and benchmarking performance
- Implement prompt guardrails to prevent attacks like prompt injection, jailbreaking, and prompt leaking (see the sketch after this list)
- Develop, deploy and maintain auto prompt solutions
- Design and implement minimum design standards for every use case involving prompt engineering
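A minimal sketch of a prompt template with a regex-based guardrail against obvious injection attempts, in line with the Python/regex skills listed below. The template text and the patterns are illustrative assumptions; production guardrails would need far more robust detection:

```python
import re

# Hypothetical prompt template for a Gen-AI use case; wording is illustrative.
PROMPT_TEMPLATE = (
    "You are a document-summarization assistant.\n"
    "Answer only using the provided document.\n\n"
    "Document:\n{document}\n\nQuestion: {question}\n"
)

# Very rough patterns that often signal prompt-injection attempts.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard .* system prompt",
    r"reveal .*(system prompt|hidden instructions)",
]

def guard_user_input(text: str) -> str:
    """Reject obviously suspicious input before it reaches the model."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            raise ValueError("Potential prompt-injection attempt detected")
    return text

def build_prompt(document: str, question: str) -> str:
    return PROMPT_TEMPLATE.format(
        document=guard_user_input(document),
        question=guard_user_input(question),
    )

if __name__ == "__main__":
    print(build_prompt("The policy covers water damage up to $10,000.",
                       "What is the water damage limit?"))
```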
Skills and Qualifications
- Strong proficiency with Python, the Django framework, and regular expressions (regex)
- Good understanding of the machine learning frameworks PyTorch and TensorFlow
- Knowledge of Generative AI and RAG pipelines
- Strong grasp of microservice design patterns and developing scalable applications
- Ability to build and consume REST APIs
- Fine-tune and optimize code for better performance
- Strong understanding of OOP and design thinking
- Understanding the nature of asynchronous programming and its quirks and workarounds
- Good understanding of server-side templating languages
- Understanding accessibility and security compliance, user authentication and authorization between multiple systems, servers, and environments
- Integration of APIs, multiple data sources and databases into one system
- Good knowledge of API gateways and proxies such as WSO2, Kong, NGINX, and Apache HTTP Server.
- Understanding fundamental design principles behind a scalable and distributed application
- Good working knowledge on Microservices architecture, behaviour, dependencies, scalability etc.
- Experience in deploying on Cloud platform like Azure or AWS
- Familiarity and working experience with DevOps tools like Azure DevOps, Ansible, Jenkins, and Terraform
at Cargill Business Services
Job Purpose and Impact:
The Sr. Generative AI Engineer will architect, design and develop new and existing GenAI solutions for the organization. As a Generative AI Engineer, you will be responsible for developing and implementing products using cutting-edge generative AI and RAG to solve complex problems and drive innovation across our organization. You will work closely with data scientists, software engineers, and product managers to design, build, and deploy AI-powered solutions that enhance our products and services in Cargill. You will bring order to ambiguous scenarios and apply in depth and broad knowledge of architectural, engineering and security practices to ensure your solutions are scalable, resilient and robust and will share knowledge on modern practices and technologies to the shared engineering community.
Key Accountabilities:
• Apply software and AI engineering patterns and principles to design, develop, test, integrate, maintain and troubleshoot complex and varied Generative AI software solutions and incorporate security practices in newly developed and maintained applications.
• Collaborate with cross-functional teams to define AI project requirements and objectives, ensuring alignment with overall business goals.
• Conduct research to stay up-to-date with the latest advancements in generative AI, machine learning, and deep learning techniques, and identify opportunities to integrate them into our products and services. Optimize existing generative AI models and RAG for improved performance, scalability, and efficiency, and develop and maintain pipelines and RAG solutions, including data preprocessing, prompt engineering, benchmarking, and fine-tuning.
• Develop clear and concise documentation, including technical specifications, user guides and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders.
• Participate in the engineering community by maintaining and sharing relevant technical approaches and modern skills in AI.
• Contribute to the establishment of best practices and standards for generative AI development within the organization.
• Independently handle complex issues with minimal supervision, while escalating only the most complex issues to appropriate staff.
Minimum Qualifications:
• Bachelor’s degree in a related field or equivalent experience
• Minimum of five years of related work experience
• You are proficient in Python and have experience with machine learning libraries and frameworks
• Deep understanding of industry-leading foundation model capabilities and their applications.
• You are familiar with cloud-based Generative AI platforms and services
• Full stack software engineering experience to build products using Foundation Models
• Confirmed experience architecting applications, databases, services or integrations.
Internship Opportunity at REDHILL SOFTEC
Position: Software Intern
Duration: 3 Months
Domains: Machine Learning or Full Stack Web Development
Working Hours: 10 AM - 6 PM
About REDHILL SOFTEC:
REDHILL SOFTEC is a dynamic and innovative tech company committed to fostering talent and driving technological advancements. We are excited to announce an exclusive internship opportunity for MCA students passionate about Machine Learning or Full Stack Web Development.
Internship Details:
Duration: The internship spans 3 months, offering a deep dive into either Machine Learning or Full Stack Web Development.
Domains:
Machine Learning: Work on cutting-edge ML projects, learning and applying various algorithms and data analysis techniques.
Full Stack Web Development: Gain hands-on experience in both front-end and back-end development, working with the latest web technologies.
Stipend: This is an unpaid internship designed to provide valuable industry experience and skill development.
Working Hours: Interns are expected to work from 10 AM to 6 PM, ensuring a full-time immersive experience.
What We Offer:
Hands-on Experience: Engage in real-world projects that enhance your technical skills and industry knowledge.
Mentorship: Learn from experienced professionals who will guide and support you throughout the internship.
Skill Development: Acquire practical skills in Machine Learning or Full Stack Web Development, making you industry-ready.
Networking: Connect with industry experts and like-minded peers, building a strong professional network.
Who Should Apply:
MCA/BE/BCA students with a keen interest in Machine Learning or Full Stack Web Development.
Individuals looking to gain practical industry experience and enhance their technical skills.
Self-motivated learners eager to work on real-world projects and solve challenging problems.
How to Apply:
Interested candidates can apply by visiting our website and completing the registration process. Remember, spots are limited, and early applications are encouraged to secure your place in this enriching program.
Join REDHILL SOFTEC and take a significant step towards a successful career in technology!
REDHILL SOFTEC
Vijayanagar Bangalore
www.redhillsoftec.com
We look forward to welcoming passionate and driven interns to our team!
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar Python expert to help develop and deploy our AI pipeline. The main task will be deploying models and algorithms developed by our AI team, and keeping the daily production pipeline running. Our pipeline is centered around several microservices, all written in Python, that coordinate their actions through a database. We’re looking for developers with deep experience in Python including profiling and improving the performance of production code, multiprocessing / multithreading, and managing a pipeline that is constantly running. AI/ML experience is a plus, but not necessary. AWS / docker / CI/CD practices are also a plus. If you are a gamer or streamer, or enjoy watching video games and streams, that is also definitely a plus :-)
You will be responsible for:
- Building Python scripts to deploy our AI components into pipeline and production
- Developing logic to ensure multiple different AI components work together seamlessly through a microservices architecture
- Managing our daily pipeline on both on-premise servers and AWS
- Working closely with the AI engineering, backend and frontend teams
You should have the following qualities:
- Deep expertise in Python including:
- Multiprocessing / multithreaded applications
- Class-based inheritance and modules
- DB integration including pymongo and sqlalchemy (we have MongoDB and PostgreSQL databases on our backend)
- Understanding Python performance bottlenecks, and how to profile and improve the performance of production code including:
- Optimal multithreading / multiprocessing strategies (see the sketch after this list)
- Memory bottlenecks and other bottlenecks encountered with large datasets and use of numpy / opencv / image processing
- Experience in creating soft real-time processing tasks is a plus
- Expertise in Docker-based virtualization including:
- Creating & maintaining custom Docker images
- Deployment of Docker images on cloud and on-premise services
- Experience with maintaining cloud applications in AWS environments
- Experience in deploying machine learning algorithms into production (e.g., PyTorch, TensorFlow, OpenCV, etc.) is a plus
- Experience with image processing in Python is a plus (e.g., OpenCV, Pillow, etc.)
- Experience with running Nvidia GPU / CUDA-based tasks is a plus (Nvidia Triton, MLFlow)
- Knowledge of video file formats (mp4, mov, avi, etc.), encoding, compression, and using ffmpeg to perform common video processing tasks is a plus.
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Ideally a gamer or someone interested in watching gaming content online
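A minimal sketch of the multiprocessing point above: parallelizing a CPU-bound, numpy-heavy per-frame computation with a process pool. The frame source and the scoring function are stand-ins for the real pipeline, not Sizzle's actual code:

```python
import multiprocessing as mp

import numpy as np

def process_frame(frame: np.ndarray) -> float:
    """CPU-bound stand-in for per-frame work (e.g., scoring a highlight)."""
    # Simulated feature: mean absolute difference between adjacent rows.
    return float(np.abs(np.diff(frame.astype(np.int16), axis=0)).mean())

def main() -> None:
    # Fake batch of 64 grayscale frames; a real pipeline would pull these
    # from a video decoder or a database-backed job queue.
    frames = [np.random.randint(0, 256, size=(720, 1280), dtype=np.uint8)
              for _ in range(64)]

    # Separate processes sidestep the GIL for CPU-bound numpy work; a modest
    # chunksize keeps inter-process communication overhead manageable.
    with mp.Pool(processes=mp.cpu_count()) as pool:
        scores = pool.map(process_frame, frames, chunksize=8)

    print("max score:", max(scores))

if __name__ == "__main__":
    main()
```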
Seniority: We are looking for a mid to senior level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 4 years to 8 years
About Sizzle
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Sizzle works with thousands of gaming streamers to automatically create highlights and social content for them. Sizzle is available at www.sizzle.gg.
at Tiger Analytics
• Charting learning journeys with knowledge graphs.
• Predicting memory decay based on an advanced cognitive model.
• Ensuring content quality via study-behavior anomaly detection.
• Recommending tags using NLP for complex knowledge.
• Auto-associating concept maps from loosely structured data.
• Predicting knowledge mastery.
• Personalizing search queries.
Requirements:
• 6+ years experience in AI/ML with end-to-end implementation.
• Excellent communication and interpersonal skills.
• Expertise in SageMaker, TensorFlow, MXNet, or equivalent.
• Expertise with databases (e.g., NoSQL, Graph).
• Expertise with backend engineering (e.g., AWS Lambda, Node.js).
• Passionate about solving problems in education.
Job Title: Head of Analytics
Job Location: Bangalore (on-site)
About Qrata:
Qrata matches top talent with global career opportunities from the world’s leading digital companies including some of the world’s fastest growing start-ups using qrata’s talent marketplaces. To sign-up please visit Qrata Talent Sign-Up
We are currently scouting for Head of Analytics
Our Client Story:
Founded by a team of seasoned bankers with over 120 years of collective experience in banking, financial services and cards, encompassing strategy, operation, marketing, risk & technology, both in India and internationally.
We offer credit card processing solutions that help you effectively manage your credit card portfolio end-to-end. These solutions are customized to meet the unique strategic, operational, and compliance requirements of each bank.
1. Card Programs Built for Everyone: Limit assignment based on customer risk assessment and credit profiles, including secured cards
2. Cards That Can Be Used Everywhere: Through POS machines, UPI, and e-commerce websites
3. A Card for Everything: Enable customer purchases, both large and small
4. Customized Card Configurations: Restrict usage based on merchant codes, location, amount limits, etc.
5. End-to-End Support: We undertake complete customer life cycle management, right from KYC checks, onboarding, risk profiling, fraud control, billing, and collections
6. Rewards Program Management: We manage the entire card rewards and customer loyalty programs for you
What you will do:
We are seeking an experienced individual for the role of Head of Analytics. As the Head of Analytics, you will be responsible for driving data-driven decision-making, implementing advanced analytics strategies, and providing valuable insights to optimize our credit card business operations, sales and marketing, risk management & customer experience. Your expertise in statistical analysis, predictive modelling, and data visualization will be instrumental in driving growth and enhancing the overall performance of our credit card business
Qualification:
- Bachelor's or master’s degree in Technology, Mathematics, Statistics, Economics, Computer Science, or a related field
- Proven experience (7+ years) in leading analytics teams in the credit card industry
- Strong expertise in statistical analysis, predictive modelling, data mining, and segmentation techniques
- Proficiency in data manipulation and analysis using programming languages such as Python, R, or SQL
- Experience with analytics tools such as SAS, SPSS, or Tableau
- Excellent leadership and team management skills, with a track record of building and developing high-performing teams
- Strong knowledge of credit card business and understanding of credit card industry dynamics, including risk management, marketing, and customer lifecycle
- Exceptional communication and presentation skills, with the ability to effectively communicate complex information to a varied audience
What you can expect:
1. Develop and implement Analytics Strategy:
o Define the analytics roadmap for the credit card business, aligning it with overall business objectives
o Identify key performance indicators (KPIs) and metrics to track the performance of the credit card business
o Collaborate with senior management and cross-functional teams to prioritize and execute analytics initiatives
2. Lead Data Analysis and Insights:
o Conduct in-depth analysis of credit card data, customer behaviour, and market trends to identify opportunities for business growth and risk mitigation
o Develop predictive models and algorithms to assess credit risk, customer segmentation, acquisition, retention, and upsell opportunities
o Generate actionable insights and recommendations based on data analysis to optimize credit card product offerings, pricing, and marketing strategies
o Regularly present findings and recommendations to senior leadership, using data visualization techniques to effectively communicate complex information
3. Drive Data Governance and Quality:
o Oversee data governance initiatives, ensuring data accuracy, consistency, and integrity across relevant systems and platforms
o Collaborate with IT teams to optimize data collection, integration, and storage processes to support advanced analytics capabilities
o Establish and enforce data privacy and security protocols to comply with regulatory requirements
4. Team Leadership and Collaboration:
o Build and manage a high-performing analytics team, fostering a culture of innovation, collaboration, and continuous learning
o Provide guidance and mentorship to the team, promoting professional growth and development
o Collaborate with stakeholders across departments, including Marketing, Risk Management, and Finance, to align analytics initiatives with business objectives
5. Stay Updated on Industry Trends:
o Keep abreast of emerging trends, techniques, and technologies in analytics, credit card business, and the financial industry
o Leverage industry best practices to drive innovation and continuous improvement in analytics methodologies and tools
For more Opportunities Visit: Qrata Opportunities.
Job Summary:
We are seeking a highly skilled Enterprise Architect with expertise in Artificial Intelligence (AI), Microservices, and a background in insurance and healthcare to lead our organization's AI strategy, design AI solutions, and ensure alignment with business objectives. The ideal candidate will have a deep understanding of AI technologies, data analytics, cloud computing, software architecture, microservices, as well as experience in the insurance and healthcare sectors. They should be able to translate these concepts into practical solutions that drive innovation and efficiency within our enterprise. Additionally, this role will involve setting up monitoring systems to ensure the performance and reliability of our AI and microservices solutions.
Responsibilities:
AI Strategy Development:
- Collaborate with senior management to define and refine the AI strategy that aligns with the organization's goals and objectives.
- Identify opportunities to leverage AI and machine learning technologies to enhance business processes and create value.
Solution Design:
- Architect AI-driven solutions that meet business requirements, ensuring scalability, reliability, and security.
- Collaborate with cross-functional teams to define system architecture and design principles for AI applications and microservices.
- Evaluate and select appropriate AI technologies, microservices architectures, and frameworks for specific projects.
Data Management:
- Oversee data strategy, including data acquisition, preparation, and governance, to support AI and microservices initiatives.
- Design data pipelines and workflows to ensure high-quality, accessible data for AI models and microservices.
AI Model Development:
- Lead the development and deployment of AI and machine learning models.
- Implement best practices for model training, testing, and validation.
- Monitor and optimize model performance to ensure accuracy and efficiency.
Microservices Architecture:
- Define and implement microservices architecture patterns and best practices.
- Ensure that microservices are designed for scalability, flexibility, and resilience.
- Collaborate with development teams to build and deploy microservices-based applications.
Monitoring Systems:
- Set up monitoring systems for AI and microservices solutions to ensure performance, reliability, and security.
- Implement proactive alerting and reporting mechanisms to identify and address issues promptly.
Integration and Deployment:
- Work with IT teams to integrate AI solutions and microservices into existing systems and applications.
- Ensure seamless deployment and monitoring of AI and microservices solutions in production environments.
Compliance and Security:
- Ensure that AI solutions, microservices, and monitoring systems comply with relevant regulations and data privacy standards.
- Implement security measures to protect AI models, data, and microservices.
Stakeholder Communication:
- Collaborate with business stakeholders to gather requirements and provide regular updates on AI and microservices project progress.
- Translate technical concepts into non-technical language for various audiences.
Research and Innovation:
- Stay up-to-date with emerging AI, microservices, and cloud computing trends, technologies, and best practices.
- Identify opportunities for innovation and propose new AI and microservices initiatives to drive business growth.
Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Proven experience (X+ years) as an Enterprise Architect with a focus on AI, Microservices, and machine learning.
- Strong knowledge of AI technologies, including deep learning, natural language processing, computer vision, and reinforcement learning.
- Proficiency in data analytics, cloud computing platforms (e.g., AWS, Azure, GCP), big data technologies, and microservices architecture.
- Experience with AI model development, deployment, and monitoring, as well as microservices design and implementation.
- Excellent communication and interpersonal skills.
- Ability to lead cross-functional teams and drive innovation.
- Strong problem-solving and critical-thinking abilities.
- Knowledge of regulatory requirements and data privacy standards related to AI and microservices.
Preferred Qualifications:
- AI-related certifications (e.g., AWS Certified Machine Learning – Specialty, Google Cloud Professional Machine Learning Engineer, etc.).
- Experience in industries such as insurance and healthcare, with a deep understanding of their specific challenges and requirements.
- Previous experience with enterprise architecture frameworks (e.g., TOGAF).
This Enterprise Architect with AI, Microservices, Insurance, and Healthcare Experience role offers an exciting opportunity to shape the AI strategy, microservices architecture, and drive innovation within our organization, particularly in the insurance and healthcare sectors. Additionally, you will play a key role in setting up monitoring systems to ensure the performance and reliability of our AI and microservices solutions. If you have a passion for AI, microservices, and a strong background in these industries, we encourage you to apply and be part of our dynamic team.
Are you passionate about pushing the boundaries of Artificial Intelligence and its applications in the software development lifecycle? Are you excited about building AI models that can revolutionize how developers ship, refactor, and onboard to legacy or existing applications faster? If so, Zevo.ai has the perfect opportunity for you!
As an AI Researcher/Engineer at Zevo.ai, you will play a crucial role in developing cutting-edge AI models using CodeBERT and codexGLUE to achieve our goal of providing an AI solution that supports developers throughout the sprint cycle. You will be at the forefront of research and development, harnessing the power of Natural Language Processing (NLP) and Machine Learning (ML) to revolutionize the way software development is approached.
Responsibilities:
- AI Model Development: Design, implement, and refine AI models utilizing CodeBERT and codexGLUE to comprehend codebases, facilitate code understanding, automate code refactoring, and enhance the developer onboarding process.
- Research and Innovation: Stay up-to-date with the latest advancements in NLP and ML research, identifying novel techniques and methodologies that can be applied to Zevo.ai's AI solution. Conduct experiments, perform data analysis, and propose innovative approaches to enhance model performance.
- Data Collection and Preparation: Collaborate with data engineers to identify, collect, and preprocess relevant datasets necessary for training and evaluating AI models. Ensure data quality, correctness, and proper documentation.
- Model Evaluation and Optimization: Develop robust evaluation metrics to measure the performance of AI models accurately. Continuously optimize and fine-tune models to achieve state-of-the-art results.
- Code Integration and Deployment: Work closely with software developers to integrate AI models seamlessly into Zevo.ai's platform. Ensure smooth deployment and monitor the performance of the deployed models.
- Collaboration and Teamwork: Collaborate effectively with cross-functional teams, including data scientists, software engineers, and product managers, to align AI research efforts with overall company objectives.
- Documentation: Maintain detailed and clear documentation of research findings, methodologies, and model implementations to facilitate knowledge sharing and future developments.
- Ethics and Compliance: Ensure compliance with ethical guidelines and legal requirements related to AI model development, data privacy, and security.
Requirements
- Educational Background: Bachelor's/Master's or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field. A strong academic record with a focus on NLP and ML is highly desirable.
- Technical Expertise: Proficiency in NLP, Deep Learning, and experience with AI model development using frameworks like PyTorch or TensorFlow. Familiarity with CodeBERT and codexGLUE is a significant advantage.
- Programming Skills: Strong programming skills in Python and experience working with large-scale software projects.
- Research Experience: Proven track record of conducting research in NLP, ML, or related fields, demonstrated through publications, conference papers, or open-source contributions.
- Problem-Solving Abilities: Ability to identify and tackle complex problems related to AI model development and software engineering.
- Team Player: Excellent communication and interpersonal skills, with the ability to collaborate effectively in a team-oriented environment.
- Passion for AI: Demonstrated enthusiasm for AI and its potential to transform software development practices.
If you are eager to be at the forefront of AI research, driving innovation and impacting the software development industry, join Zevo.ai's talented team of experts as an AI Researcher/Engineer. Together, we'll shape the future of the sprint cycle and revolutionize how developers approach code understanding, refactoring, and onboarding!
We are seeking a talented and motivated AI Verification Engineer to join our team. The ideal candidate will be responsible for the validation of our AI and Machine Learning systems, ensuring that they meet all necessary quality assurance requirements and work reliably and optimally in real-world scenarios. The role requires strong analytical skills, a good understanding of AI and ML technologies, and a dedication to achieving excellence in the production of state-of-the-art systems.
Key Responsibilities:
- Develop and execute validation strategies and test plans for AI and ML systems, during development and on production environments.
- Work closely with AI/ML engineers and data scientists in understanding system requirements and capabilities and coming up with key metrics for system efficacy.
- Evaluate the system performance under various operating conditions, data variety, and scenarios.
- Perform functional, stress, system, and other testing types to ensure our systems' reliability and robustness.
- Create automated test procedures and systems for regular verification and validation, and detect anomalies in usage (a hedged pytest-style sketch follows this list).
- Report and track defects, providing detailed information to facilitate problem resolution.
- Lead the continuous review and improvement of validation and testing methodologies, procedures, and tools.
- Provide detailed reports and documentation on system performance, issues, and validation results.
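The automated validation step referenced above can be as simple as a scheduled test suite that fails when a deployed model's quality drops. The sketch below is purely illustrative and not part of the job description: pytest is an assumed tool choice, and the file names (model.pkl, holdout.npz) and the 0.90 accuracy floor are invented placeholders.

```python
# Hedged example of an automated validation check: a pytest test that fails if a
# saved model drops below a minimum accuracy on a held-out set. File paths and
# the 0.90 threshold are illustrative assumptions.
import pickle
import numpy as np
import pytest
from sklearn.metrics import accuracy_score

MIN_ACCURACY = 0.90

@pytest.fixture
def model_and_data():
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)
    data = np.load("holdout.npz")          # arrays saved as X and y
    return model, data["X"], data["y"]

def test_model_meets_accuracy_floor(model_and_data):
    model, X, y = model_and_data
    accuracy = accuracy_score(y, model.predict(X))
    assert accuracy >= MIN_ACCURACY, f"accuracy {accuracy:.3f} below {MIN_ACCURACY}"
```

In practice such checks would run in CI/CD alongside stress and functional tests, with thresholds agreed with the data science team.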
Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Proven experience in the testing and validation of AI/ML systems or equivalent complex systems.
- Good knowledge and understanding of AI and ML concepts, tools, and frameworks.
- Proficient in scripting and programming languages such as Python, shell scripting, etc.
- Experience with AI/ML platforms and libraries such as TensorFlow, PyTorch, Keras, or Scikit-Learn.
- Excellent problem-solving abilities and attention to detail.
- Strong communication skills, with the ability to document and explain complex technical concepts clearly.
- Ability to work in a fast-paced, collaborative environment.
Preferred Skills & Qualifications:
- A good understanding of various large language models, image models, and their comparative strengths and weaknesses.
- Knowledge of CI/CD pipelines and experience with tools such as Jenkins, Git, Docker.
- Experience with cloud platforms like AWS, Google Cloud, or Azure.
- Understanding of Data Analysis and Visualization tools and techniques.
Credit Card processing solutions for banks & NBFCs
Role: Head of Analytics
Location: Bangalore (Full time)
ABOUT QRATA:
Qrata matches top talent with global career opportunities from the world’s leading digital companies including some of the world’s fastest growing startups using Qrata’s talent marketplaces. To sign-up, please visit Qrata Talent Sign-Up
ABOUT THE COMPANY WE ARE HIRING FOR:
Our client is offering credit card solutions for banks and financial institutions. It provides services like credit card design and onboarding, credit card authorization, payment processing, collections and dispute resolutions, credit card fraud detection, and more. They serve in the B2B space in the FinTech market segments.
POSITION OVERVIEW
We are seeking an experienced individual for the role of Head of Analytics. As the Head of Analytics, you will be responsible for driving data-driven decision-making, implementing advanced analytics strategies, and providing valuable insights to optimize our credit card business operations, sales and marketing, risk management & customer experience. Your expertise in statistical analysis, predictive modeling, and data visualization will be instrumental in driving growth and enhancing the overall performance of our credit card business.
Responsibilities:
1. Develop and implement Analytics Strategy:
o Define the analytics roadmap for the credit card business, aligning it with overall business objectives.
o Identify key performance indicators (KPIs) and metrics to track the performance of the credit card business.
o Collaborate with senior management and cross-functional teams to prioritize and execute analytics initiatives.
2. Lead Data Analysis and Insights:
o Conduct in-depth analysis of credit card data, customer behavior, and market trends to identify opportunities for business growth and risk mitigation.
o Develop predictive models and algorithms to assess credit risk, customer segmentation, acquisition, retention, and upsell opportunities.
o Generate actionable insights and recommendations based on data analysis to optimize credit card product offerings, pricing, and marketing strategies.
o Regularly present findings and recommendations to senior leadership, using data visualization techniques to effectively communicate complex information.
3. Drive Data Governance and Quality:
o Oversee data governance initiatives, ensuring data accuracy, consistency, and integrity across relevant systems and platforms.
o Collaborate with IT teams to optimize data collection, integration, and storage processes to support advanced analytics capabilities.
o Establish and enforce data privacy and security protocols to comply with regulatory requirements.
4. Team Leadership and Collaboration:
o Build and manage a high-performing analytics team, fostering a culture of innovation, collaboration, and continuous learning.
o Provide guidance and mentorship to the team, promoting professional growth and development.
o Collaborate with stakeholders across departments, including Marketing, Risk Management, and Finance, to align analytics initiatives with business objectives.
5. Stay Updated on Industry Trends:
o Keep abreast of emerging trends, techniques, and technologies in analytics, credit card business, and the financial industry.
o Leverage industry best practices to drive innovation and continuous improvement in analytics methodologies and tools.
Qualifications:
Bachelor's or master’s degree in Technology, Mathematics, Statistics, Economics, Computer Science, or a related field.
Proven experience (7+ years) in leading analytics teams in the credit card industry.
Strong expertise in statistical analysis, predictive modelling, data mining, and segmentation techniques.
Proficiency in data manipulation and analysis using programming languages such as Python, R, or SQL.
Experience with analytics tools such as SAS, SPSS, or Tableau.
Excellent leadership and team management skills, with a track record of building and developing high-performing teams.
Strong knowledge of credit card business and understanding of credit card industry dynamics, including risk management, marketing, and customer lifecycle.
Exceptional communication and presentation skills, with the ability to effectively communicate complex information to a varied audience.
About Kloud9:
Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.
Kloud9 was founded with the vision of bridging the gap between E-commerce and cloud. E-commerce in any industry is constrained by, and spends heavily on, physical data infrastructure, which poses a huge financial challenge.
At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.
Our sole focus is to provide cloud expertise to the retail industry, giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers, and developers have been designing, building, and implementing solutions for retailers for an average of more than 20 years.
We are a cloud vendor that is both platform and technology independent. Our vendor independence not only provides us with a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.
Responsibilities:
● Studying, transforming, and converting data science prototypes
● Deploying models to production
● Training and retraining models as needed
● Analyzing the ML algorithms that could be used to solve a given problem and ranking them by their respective scores (an illustrative sketch follows this list)
● Analyzing the errors of the model and designing strategies to overcome them
● Identifying differences in data distribution that could affect model performance in real-world situations
● Performing statistical analysis and using results to improve models
● Supervising the data acquisition process if more data is needed
● Defining data augmentation pipelines
● Defining the pre-processing or feature engineering to be done on a given dataset
● Extending and enriching existing ML frameworks and libraries
● Understanding when the findings can be applied to business decisions
● Documenting machine learning processes
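One hedged way to read the "analyze and rank candidate algorithms" responsibility above is a simple cross-validation bake-off. The dataset and the two candidate models below are illustrative assumptions, not Kloud9 specifics; only scikit-learn (already listed in the requirements) is assumed.

```python
# Illustrative sketch only: ranking candidate models by cross-validated score.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
}

scores = {}
for name, model in candidates.items():
    # Each candidate shares the same pre-processing step for a fair comparison.
    pipeline = Pipeline([("scale", StandardScaler()), ("model", model)])
    scores[name] = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc").mean()

# Rank models by mean ROC-AUC, highest first.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.3f}")
```

In a real engagement the candidate list, pre-processing, and scoring metric would come from the specific problem and data distribution at hand.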
Basic requirements:
● 4+ years of IT experience in which at least 2+ years of relevant experience primarily in converting data science prototypes and deploying models to production
● Proficiency with Python and machine learning libraries such as scikit-learn, matplotlib, seaborn and pandas
● Knowledge of Big Data frameworks like Hadoop, Spark, Pig, Hive, Flume, etc
● Experience in working with ML frameworks like TensorFlow, Keras, OpenCV
● Strong written and verbal communication skills
● Excellent interpersonal and collaboration skills.
● Expertise in visualizing and manipulating big datasets
● Familiarity with Linux
● Ability to select hardware to run an ML model with the required latency
● Robust data modelling and data architecture skills.
● Advanced degree in Computer Science/Math/Statistics or a related discipline.
● Advanced Math and Statistics skills (linear algebra, calculus, Bayesian statistics, mean, median, variance, etc.)
Nice to have
● Familiarity with writing Java and R code.
● Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
● Verifying data quality, and/or ensuring it via data cleaning
● Supervising the data acquisition process if more data is needed
● Finding available datasets online that could be used for training
Why Explore a Career at Kloud9:
With job opportunities in prime locations in the US, London, Poland, and Bengaluru, we help build your career path in the cutting-edge technologies of AI, Machine Learning, and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with their creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.
Igeeks is India's no. 1 internship platform with 12000+ internships in Engineering, MBA, Commerce & Management, and other streams. Igeeks is offering industrial internships for B.E/Engineering students based on Embedded Systems, IoT, Machine Learning in Python/Java, and Big Data. Electronics and Communication, Computer Science Engineering, Information Science Engineering, Mechanical Engineering, Electrical and Electronics, and Telecommunication Engineering students can apply for it. The duration of the internship will be 1 month (4 weeks). Limited seats will be offered.
New Internship batch starts from 8th January 2024
Benefits of Internship :
- Work on projects.
- Exposure to working on current technologies
- Company certification
- Hands-on experience
For more information, refer to the official website here: www.igeekstechnologies.com
Data Scientist
Cubera is a data company revolutionizing big data analytics and AdTech through data-share-value principles, wherein users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards Web3.
What you’ll do?
- Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
- Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
- Research new and innovative machine learning approaches.
- Perform hands-on analysis and modeling of enormous data sets to develop insights that increase Ad Traffic and Campaign Efficacy.
- Collaborate with other data scientists, data engineers, product managers, and business stakeholders to build well-crafted, pragmatic data products.
- Actively take on new projects and constantly try to improve the existing models and infrastructure necessary for offline and online experimentation and iteration.
- Work with your team on ambiguous problem areas in existing or new ML initiatives
What are we looking for?
- Ability to write a SQL query to pull the data you need (a minimal illustrative sketch follows this list).
- Fluency in Python and familiarity with its scientific stack, such as NumPy, pandas, scikit-learn, and matplotlib.
- Experience in TensorFlow, PyTorch, and/or R modelling
- Ability to understand a business problem and translate and structure it into a data science problem.
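As a minimal illustration of the SQL-plus-Python workflow listed above (not Cubera's actual schema), the sketch below assumes a local SQLite file with an ad_events table; the database file, table, and column names are invented for demonstration.

```python
# Minimal sketch, assuming a local SQLite database with an `ad_events` table;
# table and column names are illustrative, not part of the job description.
import sqlite3
import pandas as pd

conn = sqlite3.connect("campaigns.db")
query = """
    SELECT campaign_id,
           SUM(clicks)      AS clicks,
           SUM(impressions) AS impressions
    FROM ad_events
    GROUP BY campaign_id
"""
df = pd.read_sql_query(query, conn)

# Derive a simple efficacy metric (click-through rate) per campaign.
df["ctr"] = df["clicks"] / df["impressions"]
print(df.sort_values("ctr", ascending=False).head())
```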
Job Category: Data Science
Job Type: Full Time
Job Location: Bangalore
Job Summary
As a Data Science Lead, you will manage multiple consulting projects of varying complexity and ensure on-time and on-budget delivery for clients. You will lead a team of data scientists and collaborate across cross-functional groups, while contributing to new business development, supporting strategic business decisions, and maintaining and strengthening the client base.
- Work with the team to define business requirements, come up with an analytical solution, and deliver the solution with a specific focus on the big picture to drive robustness of the solution
- Work with teams of smart collaborators. Be responsible for their appraisals and career development.
- Participate and lead executive presentations with client leadership stakeholders.
- Be part of an inclusive and open environment. A culture where making mistakes and learning from them is part of life
- See how your work contributes to building an organization and be able to drive Org level initiatives that will challenge and grow your capabilities.
Role & Responsibilities
- Serve as an expert in Data Science; build frameworks to develop production-level DS/AI models.
- Apply AI research and ML models to accelerate business innovation and solve impactful business problems for our clients.
- Lead multiple teams across clients ensuring quality and timely outcomes on all projects.
- Lead and manage the onsite-offshore relationship while adding value to the client.
- Partner with business and technical stakeholders to translate challenging business problems into state-of-the-art data science solutions.
- Build a winning team focused on client success. Help team members build lasting career in data science and create a constant learning/development environment.
- Present results, insights, and recommendations to senior management with an emphasis on the business impact.
- Build engaging rapport with client leadership through relevant conversations and genuine business recommendations that impact the growth and profitability of the organization.
- Lead or contribute to org level initiatives to build the Tredence of tomorrow.
Qualification & Experience
- Bachelor's /Master's /PhD degree in a quantitative field (CS, Machine learning, Mathematics, Statistics, Data Science) or equivalent experience.
- 6-10+ years of experience in data science, building hands-on ML models
- Expertise in ML – Regression, Classification, Clustering, Time Series Modeling, Graph Network, Recommender System, Bayesian modeling, Deep learning, Computer Vision, NLP/NLU, Reinforcement learning, Federated Learning, Meta Learning.
- Proficient in some or all of the following techniques: Linear & Logistic Regression, Decision Trees, Random Forests, K-Nearest Neighbors, Support Vector Machines, ANOVA, Principal Component Analysis, Gradient Boosted Trees, ANN, CNN, RNN, Transformers.
- Knowledge of programming languages SQL, Python/ R, Spark.
- Expertise in ML frameworks and libraries (TensorFlow, Keras, PyTorch).
- Experience with cloud computing services (AWS, GCP or Azure)
- Expert in statistical modelling & algorithms, e.g., hypothesis testing, sample size estimation, A/B testing (a brief illustrative sketch follows this list)
- Knowledge of mathematical programming (Linear Programming, Mixed Integer Programming, etc.) and stochastic modelling (Markov chains, Monte Carlo, stochastic simulation, queuing models).
- Experience with optimization solvers (Gurobi, CPLEX) and algebraic modelling languages (PuLP)
- Knowledge in GPU code optimization, Spark MLlib Optimization.
- Familiarity with deploying and monitoring ML models in production and delivering data products to end-users.
- Experience with ML CI/CD pipelines.
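To make the A/B testing and sample size estimation line item above concrete, here is a hedged sketch using statsmodels; the library choice, conversion counts, baseline rates, and power target are made-up assumptions for illustration, not client figures.

```python
# Hedged illustration of hypothesis testing and sample size estimation for an
# A/B test; all numbers below are invented.
from statsmodels.stats.proportion import proportions_ztest, proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Two-sample test: did variant B convert better than variant A?
conversions = [520, 580]          # successes in A and B
samples = [10_000, 10_000]        # observations in A and B
z_stat, p_value = proportions_ztest(conversions, samples, alternative="smaller")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# Sample size needed per arm to detect a lift from 5.2% to 5.8% with 80% power.
effect = proportion_effectsize(0.052, 0.058)
n_per_arm = NormalIndPower().solve_power(effect_size=effect, power=0.8, alpha=0.05)
print(f"required sample size per arm: {n_per_arm:.0f}")
```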
THE IDEAL CANDIDATE WILL
- Engage with executive level stakeholders from client's team to translate business problems to high level solution approach
- Partner closely with practice, and technical teams to craft well-structured comprehensive proposals/ RFP responses clearly highlighting Tredence’s competitive strengths relevant to Client's selection criteria
- Actively explore the client’s business and formulate solution ideas that can improve process efficiency and cut cost, or achieve growth/revenue/profitability targets faster
- Work hands-on across various MLOps problems and provide thought leadership
- Grow and manage large teams with diverse skillsets
- Collaborate, coach, and learn with a growing team of experienced Machine Learning Engineers and Data Scientists
ELIGIBILITY CRITERIA
- BE/BTech/MTech (Specialization/courses in ML/DS)
- At least 7+ years of consulting services delivery experience
- Very strong problem-solving skills & work ethic
- Possesses strong analytical/logical thinking, storyboarding and executive communication skills
- 5+ years of experience in Python/R, SQL
- 5+ years of experience in NLP algorithms, Regression & Classification Modelling, Time Series Forecasting
- Hands on work experience in DevOps
- Should have good knowledge of different deployment types such as PaaS, SaaS, and IaaS
- Exposure to cloud technologies like Azure, AWS, or GCP
- Knowledge of Python and packages for data analysis (scikit-learn, SciPy, NumPy, pandas, matplotlib).
- Knowledge of Deep Learning frameworks: Keras, TensorFlow, PyTorch, etc.
- Experience with one or more Container-ecosystem (Docker, Kubernetes)
- Experience in building an orchestration pipeline to convert plain Python models into a deployable API/RESTful endpoint (a minimal sketch follows this list).
- Good understanding of OOP & Data Structures concepts
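As a hedged sketch of the "plain Python model to RESTful endpoint" point above, here is one possible wrapper using FastAPI. FastAPI is an assumed framework choice, and the file name model.pkl, the request schema, and the route are illustrative placeholders.

```python
# Minimal sketch of wrapping a pickled estimator as a RESTful prediction endpoint.
# The model file, schema, and route are assumptions for illustration only.
import pickle
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class Features(BaseModel):
    values: list[float]   # one flat feature vector per request

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```

In practice the same wrapper would be containerised and wired into the CI/CD and deployment strategies mentioned elsewhere in this posting.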
Nice to Have:
- Exposure to deployment strategies like: Blue/Green, Canary, AB Testing, Multi-arm Bandit
- Experience in Helm is a plus
- Strong understanding of data infrastructure, data warehouse, or data engineering
You can expect to –
- Work with the world's biggest retailers and help them solve some of their most critical problems. Tredence is a preferred analytics vendor for some of the largest retailers across the globe
- Create multi-million-dollar business opportunities by leveraging an impact mindset, cutting-edge solutions, and industry best practices.
- Work in a diverse environment that keeps evolving
- Hone your entrepreneurial skills as you contribute to growth of the organization
2-5 yrs of proven experience in ML, DL, and preferably NLP.
Preferred Educational Background - B.E/B.Tech, M.S./M.Tech, Ph.D.
What will you work on?
1) Problem formulation and solution designing of ML/NLP applications across complex well-defined as well as open-ended healthcare problems.
2) Cutting-edge machine learning, data mining, and statistical techniques to analyse and utilise large-scale structured and unstructured clinical data.
3) End-to-end development of company proprietary AI engines - data collection, cleaning, data modelling, model training/testing, monitoring, and deployment.
4) Research and innovate novel ML algorithms and their applications suited to the problem at hand.
What are we looking for?
1) Deeper understanding of business objectives and ability to formulate the problem as a Data Science problem.
2) Solid expertise in knowledge graphs, graph neural nets, clustering, classification.
3) Strong understanding of data normalization techniques, SVM, Random Forest, data visualization techniques.
4) Expertise in RNN, LSTM, and other neural network architectures.
5) DL frameworks: TensorFlow, PyTorch, Keras
6) High proficiency with standard database skills (e.g., SQL, MongoDB, Graph DB), data preparation, cleaning, and wrangling/munging.
7) Comfortable with web scraping, extracting, manipulating, and analyzing complex, high-volume, high-dimensionality data from varying sources.
8) Experience with deploying ML models on cloud platforms like AWS or Azure.
9) Familiarity with version control with Git, Bitbucket, SVN, or similar.
Why choose us?
1) We offer competitive remuneration.
2) We give opportunities to work on exciting and cutting-edge machine learning problems so you contribute towards transforming the healthcare industry.
3) We offer flexibility to choose your tools, methods, and ways to collaborate.
4) We always value and believe in new ideas and encourage creative thinking.
5) We offer an open culture where you will work closely with the founding team and have the chance to influence the product design and execution.
6) And, of course, the thrill of being part of an early-stage startup, launching a product, and seeing it in the hands of the users.
Top Management Consulting Company
We are looking for a Machine Learning engineer for one of our premium clients.
Experience: 2-9 years
Location: Gurgaon/Bangalore
Tech Stack:
Python, PySpark, the Python Scientific Stack; MLFlow, Grafana, Prometheus for machine learning pipeline management and monitoring; SQL, Airflow, Databricks, our own open-source data pipelining framework called Kedro, Dask/RAPIDS; Django, GraphQL and ReactJS for horizontal product development; container technologies such as Docker and Kubernetes, CircleCI/Jenkins for CI/CD, cloud solutions such as AWS, GCP, and Azure as well as Terraform and Cloudformation for deployment
1. ROLE AND RESPONSIBILITIES
1.1. Implement next generation intelligent data platform solutions that help build high performance distributed systems.
1.2. Proactively diagnose problems and envisage long term life of the product focusing on reusable, extensible components.
1.3. Ensure agile delivery processes.
1.4. Work collaboratively with stakeholders including product and engineering teams.
1.5. Build best-practices in the engineering team.
2. PRIMARY SKILL REQUIRED
2.1. 2-6 years of core software product development experience.
2.2. Experience of working with data-intensive projects, with a variety of technology stacks including different programming languages (Java, Python, Scala)
2.3. Experience in building infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources to support other teams to run pipelines/jobs/reports etc.
2.4. Experience in Open-source stack
2.5. Experiences of working with RDBMS databases, NoSQL Databases
2.6. Knowledge of enterprise data lakes, data analytics, reporting, in-memory data handling, etc.
2.7. Have a core computer science academic background
2.8. Aspire to continue pursuing a career in the technical stream
3. Optional Skill Required:
3.1. Understanding of Big Data technologies and Machine learning/Deep learning
3.2. Understanding of diverse set of databases like MongoDB, Cassandra, Redshift, Postgres, etc.
3.3. Understanding of Cloud Platform: AWS, Azure, GCP, etc.
3.4. Experience in BFSI domain is a plus.
4. PREFERRED SKILLS
4.1. A startup mentality: comfort with ambiguity, a willingness to test, learn, and improve rapidly
Hi,
We are hiring a Data Scientist for Bangalore.
Req Skills:
- NLP
- ML programming
- Spark
- Model Deployment
- Experience processing unstructured data and building NLP models
- Experience with big data tools such as PySpark
- Pipeline orchestration using Airflow and model deployment experience is preferred
Contact Center software that leverages AI to improve customer experience
As a machine learning engineer on the team, you will
• Help science and product teams innovate in developing and improving end-to-end solutions to machine learning-based security/privacy control
• Partner with scientists to brainstorm and create new ways to collect/curate data
• Design and build infrastructure critical to solving problems in privacy-preserving machine learning
• Help the team self-organize and follow machine learning best practices.
Basic Qualifications
• 4+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
• 4+ years of programming experience with at least one modern language such as Java, C++, or C#, including object-oriented design
• 4+ years of professional software development experience
• 4+ years of experience as a mentor, tech lead, or leading an engineering team
• 4+ years of professional software development experience in Big Data and Machine Learning fields
• Knowledge of common ML frameworks such as TensorFlow, PyTorch
• Experience with cloud provider Machine Learning tools such as AWS SageMaker
• Programming experience with at least two modern languages such as Python, Java, C++, or C#, including object-oriented design
• 3+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
• Experience in Python
• BS in Computer Science or equivalent
DATA SCIENTIST-MACHINE LEARNING
GormalOne LLP. Mumbai IN
Job Description
GormalOne is a social impact Agri tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's “Nutrition security”. Our mission is driven by the use of advanced technology. Our technology will be highly user-friendly, for the majority of farmers, who are digitally naive. We are looking for people, who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction amongst others.
GormalOne is looking for a machine learning engineer to join the team. This collaborative yet dynamic role is suited for candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating ML Ops best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of ML Ops best practices through hands-on experience but can simultaneously help uplift the knowledge of their colleagues.
Location: Bangalore
Roles & Responsibilities
- Individual contributor
- Developing and maintaining an end-to-end data science project
- Deploying scalable applications on different platforms
- Ability to analyze and enhance the efficiency of existing products
What are we looking for?
- 3 to 5 Years of experience as a Data Scientist
- Skilled in Data Analysis, EDA, Model Building, and Analysis.
- Basic coding skills in Python
- Decent knowledge of Statistics
- Creating pipelines for ETL and ML models.
- Experience in the operationalization of ML models
- Good exposure to Deep Learning, ANN, DNN, CNN, RNN, and LSTM.
- Hands-on experience in Keras, PyTorch or Tensorflow
Basic Qualifications
- Tech/BE in Computer Science or Information Technology
- Certification in AI, ML, or Data Science is preferred.
- Master/Ph.D. in a relevant field is preferred.
Preferred Requirements
- Experience in tools and packages like TensorFlow, MLflow, Airflow
- Experience in object detection techniques like YOLO
- Exposure to cloud technologies
- Operationalization of ML models
- Good understanding and exposure to MLOps
Kindly note: Salary shall be commensurate with qualifications and experience
Top Management Consulting Company
We are looking for a technically driven "ML Ops Engineer" for one of our premium clients.
COMPANY DESCRIPTION:
Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs, including automated testing frameworks and libraries such as pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
Job brief
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.
In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine-learning and research.
Your goal will be to help our company analyze trends to make better decisions.
Requirements
1. 2 to 4 years of relevant industry experience
2. Experience in linear algebra, statistics, and probability (e.g., distributions), along with Deep Learning and Machine Learning
3. Strong mathematical and statistics background is a must
4. Experience in machine learning frameworks such as TensorFlow, Caffe, PyTorch, or MXNet
5. Strong industry experience in using design patterns, algorithms, and data structures
6. Industry experience in using feature engineering, model performance tuning, and optimizing machine learning models
7. Hands-on development experience in Python and packages such as NumPy, scikit-learn, and Matplotlib
8. Experience in model building and hyperparameter tuning
Introduction
Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their bodies and they don't have to rely on cryptic 2 liners given to them as a diagnosis.
Towards this aim, we are building an artificial intelligence enabled cloud based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, YCombinator and other investors from India, US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here's a small sample of what we're building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
We are looking for an experienced MLOps Engineer to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be a key member of the team, involved in decision making, implementation, development, and advancement of ML operations for the core AI platform.
Roles and Responsibilities:
- Work closely with a cross-functional team to serve business goals and objectives.
- Develop, implement, and manage MLOps in cloud infrastructure for data preparation, deployment, monitoring, and retraining of models (a hedged example of tracking a retraining run follows this list)
- Design and build application containerisation and orchestrate with Docker and Kubernetes on the AWS platform.
- Build and maintain code, tools, and packages in the cloud
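As one hedged interpretation of the "monitoring and retraining" responsibility above, the sketch logs a candidate retraining run with MLflow (listed later in this posting only as a plus); the dataset, model, and promotion logic are placeholder assumptions, not Synapsica's actual pipeline.

```python
# Hedged sketch of tracking a retraining run with MLflow; dataset and model
# choices are illustrative assumptions only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="retraining-candidate"):
    model = GradientBoostingRegressor(n_estimators=300, random_state=0)
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_val, model.predict(X_val))

    mlflow.log_param("n_estimators", 300)
    mlflow.log_metric("val_mae", mae)
    mlflow.sklearn.log_model(model, artifact_path="model")

    # A production pipeline would promote this run only if it beats the
    # currently deployed model on the same validation slice.
    print(f"validation MAE: {mae:.2f}")
```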
Requirements:
- 2+ years of experience in data engineering
- 3+ years of experience in Python, with familiarity with popular ML libraries.
- 2+ years of experience in model serving and pipelines
- Working knowledge of containers such as Docker and Kubernetes in AWS
- Design of distributed systems deployment at scale
- Hands-on experience in coding and scripting
- Ability to write effective, scalable, and modular code.
- Familiarity with Git workflows, CI/CD, and NoSQL (MongoDB)
- Familiarity with Airflow, DVC, and MLflow is a plus
Introduction
Synapsica is a growth stage HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective, while being affordable. Every patient has the right to know exactly what is happening in their bodies and they don’t have to rely on cryptic 2 liners given to them as diagnosis. Towards this aim, we are building an artificial intelligence enabled cloud based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by YCombinator and other investors from India, US and Japan. We are proud to have GE, AIIMS, and the Spinal Kinetics as our partners.
Your Roles and Responsibilities
The role involves computer vision tasks including development, customization and training of Convolutional Neural Networks (CNNs); application of ML techniques (SVM, regression, clustering etc.) and traditional Image Processing (OpenCV etc.). The role is research focused and would involve going through and implementing existing research papers, deep dive of problem analysis, generating new ideas, automating and optimizing key processes.
Requirements:
- Strong problem-solving ability
- Prior experience with Python, cuDNN, Tensorflow, PyTorch, Keras, Caffe (or similar Deep Learning frameworks).
- Extensive understanding of computer vision/image processing applications like object classification, segmentation, object detection etc
- Ability to write a custom Convolutional Neural Network architecture in PyTorch (or similar); a small illustrative sketch follows this list
- Experience of GPU/DSP/other Multi-core architecture programming
- Effective communication with other project members and project stakeholders
- Detail-oriented, eager to learn, acquire new skills
- Prior Project Management and Team Leadership experience
- Ability to plan work and meet deadlines
- End to end deployment of deep learning models.
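Since the requirements above call for writing a custom CNN in PyTorch, here is a deliberately small, hedged sketch; the single-channel 64x64 input and two output classes are placeholder assumptions, not the actual radiology use case.

```python
# Illustrative PyTorch sketch of a small custom CNN; input size and class count
# are placeholder assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)          # (N, 32, 16, 16) for 64x64 inputs
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = SmallCNN()
dummy = torch.randn(4, 1, 64, 64)     # batch of 4 single-channel 64x64 images
print(model(dummy).shape)              # torch.Size([4, 2])
```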
Job Description: Sr. Python Developer
Job Brief
The job requires Python experience as well as expertise with AI/ML. This developer is expected to have strong technical skills, to work closely with the other team members in developing and managing key projects, and to be able to work on a small team with minimal supervision. The role involves troubleshooting, testing, and maintaining the core product software and databases to ensure strong optimization and functionality.
Job Requirement
- 4+ years of relevant Python experience
- Good communication skills and email etiquette
- Quick learner and a team player
- Experience in working with Python frameworks
- Experience in developing with Python & MySQL on a LAMP/LEMP stack
- Experience in developing an MVC application with Python
- Experience with threading, multithreading, and pipelines
- Experience in creating RESTful APIs with Python (JSON, XML)
- Experience in designing relational databases using MySQL and writing raw SQL queries
- Experience with GitHub version control
- Ability to write custom Python code
- Excellent working knowledge of AI/ML-based applications
- Experience in OpenCV/TensorFlow/SimpleCV/PyTorch
- Experience working in an agile software development methodology
- Understanding of the end-to-end ML project lifecycle
- Understanding of cross-platform OS systems like Windows, Linux, or UNIX, with hands-on working experience
Responsibilities
- Participate in the entire development lifecycle, from planning through implementation, documentation, testing, and deployment, all the way to monitoring.
- Produce high quality, maintainable code with great test coverage
- Integration of user-facing elements developed by front-end developers
- Build efficient, testable, and reusable Python/AI/ML modules
- Solve complex performance problems and architectural challenges
- Help with designing and architecting the product
- Design and develop the web application modules or APIs
- Troubleshoot and debug applications.
at Digitectura Technologies Private Limited
We require someone skilled in Python/C/C++ to work on new products and also support existing AI-based products.
Should be open to learning new frameworks
at Synapsica Technologies Pvt Ltd
Introduction
Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their bodies and they don't have to rely on cryptic 2 liners given to them as a diagnosis.
Towards this aim, we are building an artificial intelligence enabled cloud based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, YCombinator and other investors from India, US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here's a small sample of what we're building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
Synapsica is looking for a Principal AI Researcher to lead and drive AI based research and development efforts. Ideal candidate should have extensive experience in Computer Vision and AI Research, either through studies or industrial R&D projects and should be excited to work on advanced exploratory research and development projects in computer vision and machine learning to create the next generation of advanced radiology solutions.
The role involves computer vision tasks including development, customization, and training of Convolutional Neural Networks (CNNs); application of ML techniques (SVM, regression, clustering, etc.); and traditional Image Processing (OpenCV, etc.). The role is research-focused and would involve going through and implementing existing research papers, deep dives into problem analysis, frequent review of results, generating new ideas, building new models from scratch, publishing papers, and automating and optimizing key processes. The role will span from real-world data handling to the most advanced methods such as transfer learning, generative models, reinforcement learning, etc., with a focus on understanding quickly and experimenting even faster. The suitable candidate will collaborate closely with the medical research team, software developers, and AI research scientists. The candidate must be creative, ask questions, and be comfortable challenging the status quo. The position is based in our Bangalore office.
Primary Responsibilities
- Interface between product managers and engineers to design, build, and deliver AI models and capabilities for our spine products.
- Formulate and design AI capabilities of our stack with special focus on computer vision.
- Strategize end-to-end model training flow including data annotation, model experiments, model optimizations, model deployment and relevant automations
- Lead teams, engineers, and scientists to envision and build new research capabilities and ensure delivery of our product roadmap.
- Organize regular reviews and discussions.
- Keep the team up-to-date with latest industrial and research updates.
- Publish research and clinical validation papers
Requirements
- 6+ years of relevant experience in solving complex real-world problems at scale using computer vision-based deep learning.
- Prior experience in leading and managing a team.
- Strong problem-solving ability
- Prior experience with Python, cuDNN, Tensorflow, PyTorch, Keras, Caffe (or similar Deep Learning frameworks).
- Extensive understanding of computer vision/image processing applications like object classification, segmentation, object detection etc
- Ability to write custom Convolutional Neural Network Architecture in Pytorch (or similar)
- Background in publishing research papers and/or patents
- Computer Vision and AI Research background in medical domain will be a plus
- Experience of GPU/DSP/other Multi-core architecture programming
- Effective communication with other project members and project stakeholders
- Detail-oriented, eager to learn, acquire new skills
- Prior Project Management and Team Leadership experience
- Ability to plan work and meet the deadline
Igeeks The One-Stop Learning Platform
Igeeks provides internship training on the latest cutting-edge technologies in the industry for easy placement of students. We provide hands-on experience on our real-time projects to expose students to the real-world challenges and industry standards of implementing a project. Igeeks will provide the internship certificate on successful completion of the internship parameters. 18 Years of Experience in Educational Research & Training | Trusted by 50+ University/Institution Partners Pan India. Our award-winning tech team has trained thousands of students and guided over 5000+ working projects via practical, research-based project training, out of which some projects have won best project awards at various national & international competitions and expos.
Benefits:
· Opportunity to learn under professional and experienced software programmers.
· Gain the ability and credentials required to secure the best software engineering jobs in the industry.
· Get the confidence and knowledge to handle all kinds of challenges in a real-time work environment.
· The main benefit to the student is enhancing employability and increasing industry readiness for the IT industry.
The domains in which we offer internships programs, which might interest you are as follows.
Computer Science Engineering (CS, IS, IT, MCA, BCA).
Electronics Engineering (ECE, EEE, TCE, IE, BIO, ME).
Mechanical Engineering.
Civil Engineering.
Management & Commerce (BBA, B.com, MBA).
For more information refer to the official website here: www.igeekstechnologies.com
Who is eligible to apply for Internship Training?
B.E/B.Tech, M.E/M.Tech, B.Sc, BCA, B.Com, BBA, MCA, and MBA students (2nd year, 3rd year, 4th year, passed out, freshers, or job seekers) can apply for this internship training, which provides students with real-time exposure to industry.
How to register?
Internship Application Form. Fill the application here: https://forms.gle/pGA8KDDiLSWxdTnf9
Program Details:
Learn now: https://bit.ly/2Rq39hq
Learn now: https://bit.ly/3iiLYte
Learn now: https://bit.ly/2Rlzshk
Learn now: https://bit.ly/3gcIZzB
New Internship Batch Starting from 25th August & 5th September 2024.
We request you to share this information with all students so they can do their internship at our prestigious organization in Bangalore.
- Writing efficient, reusable, testable, and scalable code
- Understanding, analyzing, and implementing – Business needs, feature modification requests, conversion into software components
- Integration of user-oriented elements into different applications, data storage solutions
- Developing – Backend components to enhance performance and responsiveness, server-side logic and platform, statistical learning models, highly responsive web applications
- Designing and implementing – High availability and low latency applications, data protection and security features
- Performance tuning and automation of application
- Working with Python libraries like Pandas, NumPy, etc.
- Creating predictive models for AI and ML-based features
- Keeping abreast with the latest technology and trends
- Fine-tune and develop AI/ML-based algorithms based on results
Technical Skills-
Good proficiency in,
- Python frameworks like Django, etc.
- Web frameworks and RESTful APIs
- Core Python fundamentals and programming
- Code packaging, release, and deployment
- Database knowledge
- Loops, conditional, and control statements
- Object-relational mapping
- Code versioning tools like Git, Bitbucket
Fundamental understanding of,
- Front-end technologies like JS, CSS3 and HTML5
- AI, ML, Deep Learning, Version Control, Neural networking
- Data visualization, statistics, data analytics
- Design principles that are executable for a scalable app
- Creating predictive models
- Libraries like Tensorflow, Scikit-learn, etc
- Multi-process architecture
- Basic knowledge about Object Relational Mapper libraries
- Ability to integrate databases and various data sources into a unified system
Job Title: Engineering Manager
Job Location: Chennai, Bangalore
Job Summary
The Engineering Org is looking for a proficient Engineering Manager to join a team that is building exciting and futuristic Data Products at Condé Nast to enable both internal and external marketers to target audiences in real time. As an Engineering Manager, you will drive the day-to-day execution of technical and architectural decisions. The EM will own engineering deliverables inclusive of solving dependencies such as architecture, solutions, sequencing, and working with other engineering delivery teams. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.
● Primary Responsibilities
● Manage a high performing team of Software and Data Engineers within the Data & ML Engineering team part of Engineering Data Organization.
● Provide leadership and guidance to the team in Data Discovery, Data Ingestion, Transformation and Storage
● Utilizing product mindset to build, scale and deploy holistic data products after successful prototyping and drive their engineering implementation
● Provide technical coaching and lead direct reports and other members of adjacent support teams to the highest level of performance.
● Evaluate performance of direct reports and offer career development guidance.
● Meeting hiring and retention targets of the team & building a high-performance culture
● Handle escalations from internal stakeholders and manage critical issues to resolution.
● Collaborate with Architects, Product Manager, Project Manager and other teams to deliver high quality products.
● Identify recurring system and application issues and enable engineers to work with release teams, infra teams, product development, vendors and other stakeholders in investigating and resolving the cause.
● Required Skills
● 4+ years of managing Software Development teams, preferably in ML and Data Engineering teams.
● 4+ years of Agile Software development practices
● 12+ years of Software Development experience.
● Excellent Problem Solving and System Design skills
● Hands-on: Writing and reviewing code primarily in Spark, Python and/or Java
● Hands-on: Architecting & designing end-to-end Data Pipelines (NoSQL databases, Job Schedulers, Big Data development, preferably on Databricks / Cloud)
● Experience with SOA & Microservice architecture
● Knowledge of Software Engineering best practices with experience implementing CI/CD, log aggregation/monitoring/alerting for production systems
● Working knowledge of cloud and DevOps skills (AWS will be preferred)
● Strong verbal and written communication skills.
● Experience in evaluating team member performance and offering career development guidance.
● Experience in providing technical coaching to direct reports.
● Experience in architecting highly scalable products.
● Experience in collaborating with global stakeholder teams.
● Experience in working on highly available production systems.
● Strong knowledge of software release process and release pipeline.
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right move to
invest heavily in understanding this data and formed a whole new Data team entirely dedicated to
data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel
process innovation, further content enrichment, and increase market revenue. The Data team
aimed to create a company culture where data was the common language and facilitate an
environment where insights shared in real-time could improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at
Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost
online revenue. We are broadly divided into four groups, Data Intelligence, Data Engineering, Data
Science, and Operations (including Product and Marketing Ops, Client Services) along with Data
Strategy and monetization. The teams built capabilities and products to create data-driven solutions
for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of
self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We
are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
- Strong technical skills on UiPath, and end to end UiPath offerings
- Understanding of workflow-based logic and hands on experience with RE-framework.
- Excellent communication skill is a must
- Basic programming/Scripting knowledge on VBA/VB/java/C# script is must
- Knowledge in cognitive automation with OCR tools, AI/ML skills is a plus.
- Excellent software development background with any of the following (C#, C++, Java, .NET) is a plus
- Experience working on other RPA tools/vendors is a plus
- Experience in the field of Bot development
- UiPath RPA developer advanced certificate must.
Job Location: Chennai
Sizzle is an exciting new startup in the world of gaming. At Sizzle, we’re building AI to automatically create highlights of gaming streamers and esports tournaments.
For this role, we're looking for someone that loves to play and watch games, and is eager to roll up their sleeves and build up a new gaming platform. Specifically, we’re looking for a technical program manager - someone that can drive timelines, manage dependencies and get things done. You will work closely with the founders and the engineering team to iterate and launch new products and features. You will constantly report on status and maintain a dashboard across product, engineering, and user behavior.
You will:
- Be responsible for speedy and timely shipping of all products and features
- Work closely with front end engineers, product managers, and UI/UX teams to understand the product requirements in detail, and map them out to delivery timeframes
- Work closely with backend engineers to understand and map deployment timeframes and integration into pipelines
- Manage the timeline and delivery of numerous A/B tests on the website design, layout, color scheme, button placement, images/videos, and other objects to optimize time on site and conversion
- Keep track of all dependencies between projects and engineers
- Track all projects and tasks across all engineers and address any delays. Ensure tight coordination with management.
You should have the following qualities:
- Strong track record of successful delivery of complex projects and product launches
- 2+ years of software development; 2+ years of program management
- Excellent verbal and written communication skills
- Deep understanding of AI model development and deployment
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelor's or master's degree in computer science or a related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
Technical program management, ML algorithms, Tensorflow, AWS, Python
Work Experience: 3 years to 10 years
Global SaaS product built to help revenue teams. (TP1)
- You'd have to set up your own shop, work with design customers to find generalizable use cases, and build them out.
- Ability to collaborate with cross-functional teams to build and ship new features
- 2-5 years of experience
- Predictive Analytics – Machine Learning algorithms: Logistic and Linear Regression, Decision Trees, Clustering (see the sketch after this list).
- Exploratory Data Analysis – Data Preparation, Data Exploration, and Data Visualization.
- Analytics Tools – R, Python, SQL, Power BI, MS Excel.
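Purely as an illustration of the kind of predictive-analytics work listed above (and not part of the role description itself), here is a minimal Python sketch assuming scikit-learn; the dataset is synthetic and every name and value is a placeholder:

```python
# Minimal predictive-analytics sketch: logistic regression + k-means clustering
# on synthetic data. Illustrative only; a real project would use its own dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

# Synthetic records standing in for real customer/revenue data
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Supervised step: a logistic regression classifier
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Unsupervised step: k-means clustering to segment the same records
clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```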
Understands Customer Service. (CO1)
Should be highly technical, with hands-on experience in Artificial Intelligence, Machine Learning, and Python. Responsible for managing the successful delivery of projects through efficient planning and coordination.
KEY RESPONSIBILITIES OF THE POSITION :
- Create Technical Design for AI, Machine Learning, Deep Learning, NLP, NLU, NLG projects and implement the same in production.
- Solid understanding and experience of deep learning architectures and algorithms
- Working experience with AWS, most importantly AWS SageMaker, plus Aurora or MongoDB, and analytics and reporting.
- Experience solving industry problems using deep learning methods such as recurrent neural networks (RNN, LSTM), convolutional neural networks, auto-encoders, etc. (a small illustrative sketch follows this list)
- Should have experience of 2-3 production implementations of machine learning projects.
- Knowledge of open-source libraries such as Keras, TensorFlow, PyTorch
- Work with business analysts/consultants and other necessary teams to create a strong solution
- Should have in-depth understanding and experience of Data Science and Machine Learning projects using Python, R, etc. Skills in Java/C are a plus
- Should develop solutions using Python in AI/ML projects
- Should be able to train and build a team of technical developers
- Desired to have experience as a lead in designing and developing applications/tools using Microsoft technologies - ASP.NET, C#, HTML5, MVC
- Desired to have knowledge of cloud solutions such as Azure or AWS
- Desired to have knowledge of container technologies such as Docker
- Should be able to build strong relationships with project stakeholders
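As a rough, hypothetical illustration of the deep learning tooling named in the requirements above (Keras/TensorFlow, recurrent networks such as LSTMs), here is a minimal sketch of an LSTM text classifier; the vocabulary size, layer widths, and training data are invented placeholders, not a production design:

```python
# Illustrative LSTM classifier sketch using TensorFlow/Keras.
import numpy as np
from tensorflow.keras import layers, models

vocab_size, seq_len = 5000, 50  # hypothetical vocabulary and sequence length

model = models.Sequential([
    layers.Embedding(vocab_size, 64),      # token embeddings
    layers.LSTM(32),                       # recurrent layer (LSTM variant of an RNN)
    layers.Dense(1, activation="sigmoid"),  # binary output, e.g. a yes/no label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in data, just to show the end-to-end call pattern
X = np.random.randint(0, vocab_size, size=(256, seq_len))
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```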
Keywords:
- Python
- Artificial Intelligence
- Machine Learning
- AWS
- Django
- NLP
About us
Skit (previously known as Vernacular.ai - http://vernacular.ai/) is an AI-first SaaS voice automation company. Its suite of speech and language solutions enables enterprises to automate their contact centre operations. With over 10 million hours of training data, its product - Vernacular Intelligent Voice Assistant (VIVA) - can currently respond in 16+ languages, covering 160+ dialects and replicating human-like conversations.
Skit currently serves a variety of enterprise clients across diverse sectors such as BFSI, F&B, Hospitality, Consumer Electronics and Travel & Tourism, including prominent clients like Axis Bank, Hathway, Porter and Barbeque Nation. It has been featured as one of the top-notch start-ups in Cisco Launchpad's Cohort 6 and is a part of the World Economic Forum's Global Innovators Community. It has also been listed in Forbes 30 Under 30 Asia start-ups 2021 for its remarkable industry innovation.
We are looking for ML Research Engineers to work on the following problems:
- Spoken Language Understanding and Dialog Management (see the sketch after this list).
- Language semantics, parsing, and modeling across multiple languages.
- Speech Recognition, Speech Analytics and Voice Processing across multiple languages.
- Response Generation and Speech Synthesis.
- Active Learning, Monitoring and Observability mechanisms for deployments.
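For a flavour of the first problem area only, here is a deliberately tiny intent-classification baseline for spoken language understanding, assuming scikit-learn; the utterances and intent labels are invented, and this is not a description of Skit's actual systems:

```python
# Toy SLU baseline: TF-IDF features + logistic regression intent classifier.
# All utterances and labels below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "I want to check my balance", "block my card please",
    "what is my due date", "please block the card",
    "balance enquiry", "when is the payment due",
]
intents = ["balance", "block_card", "due_date", "block_card", "balance", "due_date"]

slu = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
slu.fit(utterances, intents)

print(slu.predict(["could you block my card"]))  # expected: ['block_card']
```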
Responsibilities
- Design, build and evaluate Machine Learning solutions.
- Perform experiments and statistical analyses to draw conclusions and take modeling decisions.
- Study, implement and extend state of the art systems.
- Take part in regular research reviews and discussions.
- Build, maintain and extend our open source solutions in the domain.
- Write well-crafted programs at all levels of the system. This includes the data pipelines, experiment prototypes, fast and scalable deployment models, and evaluation, visualization and monitoring systems.
Requirements
- Practical Machine Learning experience as demonstrated by earlier works.
- Knowledge of and ability to use tools from theoretical and practical aspects of computer science. This includes, but is not limited to, probability, statistics, learning theory, algorithms, software architecture, programming languages, etc.
- Good programming skills and ability to work with programs at all levels of a finished Machine Learning product. We prefer language agnosticism since that exemplifies this point.
- Git portfolios and blogs are helpful as they let us better evaluate your work.
This privacy notice explains:
- What information we collect during our application and recruitment process and why we collect it;
- How we use that information; and
- How to access and update that information.
This policy covers the information you share with Skit (Cyllid Technologies Pvt. Ltd.) during the application or recruitment process including:
- Your name, address, email address, telephone number and other contact information;
- Your resume or CV, cover letter, previous and/or relevant work experience or other experience, education, transcripts, or other information you provide to us in support of an application and/or the application and recruitment process;
- Information from interviews and phone-screenings you may have, if any;
- Details of the type of employment you are or may be looking for, current and/or desired salary and other terms relating to compensation and benefits packages, willingness to relocate, or other job preferences;
- Details of how you heard about the position you are applying for;
- Reference information and/or information received from background checks (where applicable), including information provided by third parties;
- Information about your educational and professional background from publicly available sources, including online, that we believe is relevant to your application or a potential future application (e.g. your LinkedIn profile); and/or
- Information related to any assessment you may take as part of the interview screening process.
Your information will be used by Skit for the purposes of carrying out its application and recruitment process which includes:
- Assessing your skills, qualifications and interests against our career opportunities;
- Verifying your information and carrying out reference checks and/or conducting background checks (where applicable) if you are offered a job;
- Communications with you about the recruitment process and/or your application(s), including, in appropriate cases, informing you of other potential career opportunities at Skit;
- Creating and/or submitting reports as required under any local laws and/or regulations, where applicable;
- Making improvements to Skit's application and/or recruitment process including improving diversity in recruitment practices;
- Proactively conducting research about your educational and professional background and skills and contacting you if we think you would be suitable for a role with us.
Job Description
Do you have a passion for computer vision and deep learning problems? We are looking for someone who thrives on collaboration and wants to push the boundaries of what is possible today! Material Depot (materialdepot.in) is on a mission to be India's largest tech company in the Architecture, Engineering and Construction space by democratizing the construction ecosystem and bringing stakeholders onto a common digital platform. Our engineering team is responsible for developing Computer Vision and Machine Learning tools to enable digitization across the construction ecosystem. The founding team includes people from top management consulting firms and top colleges in India (like BCG and IIT Bombay), has worked extensively in the construction space globally, and is funded by top Indian VCs.
Our team empowers Architectural and Design Businesses to effectively manage their day to day operations. We are seeking an experienced, talented Data Scientist to join our team. You’ll be bringing your talents and expertise to continue building and evolving our highly available and distributed platform.
Our solutions involve complex problem solving in computer vision and call for robust, efficient, well-tested, and clean implementations. The ideal candidate will possess the self-motivation, curiosity, and initiative to achieve those goals. Likewise, the candidate is a lifelong learner who passionately seeks to improve themselves and the quality of their work. You will work together with similar minds in a unique team where your skills and expertise can be used to influence future user experiences that will be used by millions.
In this role, you will:
- Extensive knowledge in machine learning and deep learning techniques
- Solid background in image processing/computer vision
- Experience in building datasets for computer vision tasks
- Experience working with and creating data structures / architectures
- Proficiency in at least one major machine learning framework
- Experience visualizing data to stakeholders
- Ability to analyze and debug complex algorithms
- Good understanding of and applied experience in classic 2D image processing and segmentation (see the sketch after this list)
- Robust semantic object detection under different lighting conditions
- Segmentation of non-rigid contours in challenging/low contrast scenarios
- Sub-pixel accurate refinement of contours and features
- Experience in image quality assessment
- Experience with in depth failure analysis of algorithms
- Highly skilled in at least one scripting language such as Python or MATLAB, with solid experience in C++
- Creativity and curiosity for solving highly complex problems
- Excellent communication and collaboration skills
- Mentor and support other technical team members in the organization
- Create, improve, and refine workflows and processes for delivering quality software on time and with carefully managed technical debt
- Work closely with product managers, customer support representatives, and account executives to help the business move fast and efficiently through relentless automation.
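To make the classic 2D image processing and segmentation item above concrete, here is a minimal sketch assuming OpenCV 4.x and NumPy; the input image is synthetic, and the Otsu-threshold-plus-contours pipeline is only one illustrative choice, not the team's actual approach:

```python
# Classic 2D segmentation sketch: Otsu threshold + external contour extraction.
# A synthetic image is used so the example is self-contained.
import cv2
import numpy as np

# A grayscale image with one bright rectangle standing in for a real photo
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (50, 60), (150, 140), color=200, thickness=-1)
img = cv2.GaussianBlur(img, (5, 5), 0)

# Segment with Otsu's threshold, then extract external contours (OpenCV 4.x API)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print(f"segment at ({x}, {y}) size {w}x{h}, area {cv2.contourArea(c):.0f}")
```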
How you will do this:
- You’re part of an agile, multidisciplinary team.
- You bring your own unique skill set to the table and collaborate with others to accomplish your team’s goals.
- You prioritize your work with the team and its product owner, weighing both the business and technical value of each task.
- You experiment, test, try, fail, and learn continuously.
- You don't do things just because they were always done that way; you bring your experience and expertise with you and help the team make the best decisions.
For this role, you must have:
- Strong knowledge of and experience with the functional programming paradigm.
- Experience conducting code reviews, providing feedback to other engineers.
- Great communication skills and a proven ability to work as part of a tight-knit team.
Sr Product Manager / Lead Product Manager – Data Platform
Description:
At Amagi we are looking for a product leader to build a world-class big data and analytics platform that helps our teams make data-driven decisions and accelerates our business outcomes.
We are looking for someone who is innovative and experienced in end-to-end product management to drive our long-term data and analytics strategy.
The ideal candidate will be responsible for owning the product roadmap and KPIs, and for driving product operational tasks to ensure configurable and scalable solutions.
Primary Responsibilities:
- Lead the product requirements to build a real-time, highly scalable, low-latency data platform.
- Author PRDs and define the strategic roadmap.
- Lead product design, MVPs, and POCs, and fast-track deliveries
- Collaborate with various functions and design the most effective solutions.
- Understand customer needs and define the data solutions and insights to drive business outcomes
- Define key product performance metrics to drive business and customer outcomes
Basic Qualification
- 12+ years of overall SDLC experience with 7+ years of product management experience in a fast-paced company.
- Proven experience delivering large scale highly available big data processing systems.
- Knowledge of data pipeline design, data transformation and integration methodologies.
- Technically savvy, with experience in big data systems such as AWS Redshift, Athena, Kafka, and related technologies
- Demonstrate collaborative approach and ability to work with distributed, cross functional teams.
- Experience in taking products through full life cycle, from proposal to launch
- Strategic thinking capabilities to define the product roadmap and right prioritization of the backlog in line with the long-term vision
- Strong communication and stakeholder management skills with the ability to coordinate across a diverse group of technical and non-technical stakeholders.
- Ability to deal with ambiguity and use data to solve ambiguous problems
- Technical ability to understand and discuss software architecture, product integration, non-functional requirements etc. with the Engineering team.
Preferred Qualification:
- Technically savvy with good understanding of cloud application development.
- Software development experience building Enterprise platforms
- Experience in third-party vendor assessments
- Experience working with AI, ML, Big Data, and analytics tools
- Understanding of regulations such as data privacy, data security and governance