
About Near Learn Pvt Ltd

About the Role
We are looking for a highly skilled Data Scientist with strong expertise in Machine Learning, MLOps, and Generative AI. The ideal candidate will have hands-on experience in building scalable ML models, deploying them in production, and working with modern AI frameworks, including GenAI technologies.
Key Responsibilities
· Design, develop, and deploy machine learning models for real-world business problems
· Work on end-to-end ML lifecycle: data preprocessing, model building, evaluation, deployment, and monitoring
· Implement and manage MLOps pipelines for scalable and reproducible workflows
· Utilize tools like MLflow for experiment tracking, model versioning, and lifecycle management
· Develop and integrate Generative AI (GenAI) solutions such as LLM-based applications
· Collaborate with cross-functional teams (engineering, product, business) to translate requirements into AI solutions
· Optimize model performance and ensure production stability
· Stay updated with the latest advancements in AI/ML and GenAI ecosystems
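The end-to-end lifecycle described above (preprocessing, model building, evaluation) can be sketched in a few lines. This is a minimal, illustrative example using scikit-learn on a built-in dataset; the dataset and model choice are assumptions for the sketch, not part of the role, and deployment/monitoring are out of scope here.

```python
# Minimal sketch of the ML lifecycle steps: preprocess, train, evaluate.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Bundling preprocessing and the model in one pipeline ensures the
# same transformations are applied at training and inference time --
# a core idea behind reproducible MLOps workflows.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {accuracy:.3f}")
```

In a production setting, the fitted pipeline object (not just the model) is what gets versioned and deployed, e.g. logged as an artifact with a tracking tool such as MLflow.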
Required Skills & Qualifications
· 4+ years of experience in Data Science / Machine Learning
· Strong programming skills in Python
· Hands-on experience with ML modeling techniques (supervised, unsupervised, NLP, etc.)
· Solid understanding of MLOps practices and tools
· Experience with MLflow or similar model lifecycle tools
· Practical experience in Generative AI (GenAI), including working with LLMs
· Experience with libraries/frameworks like Scikit-learn, TensorFlow, PyTorch
· Strong understanding of data structures, algorithms, and statistics
· Experience with cloud platforms (AWS/GCP/Azure) is a plus
Good to Have
· Experience with LLM fine-tuning, prompt engineering, or RAG pipelines
· Exposure to Docker, Kubernetes, and CI/CD pipelines
· Knowledge of data engineering workflows
Responsibilities:
· Design, develop, and implement AI/ML models and algorithms.
· Focus on building Proof of Concept (POC) applications to demonstrate the feasibility and value of AI solutions.
· Write clean, efficient, and well-documented code.
· Collaborate with data engineers to ensure data quality and availability for model training and evaluation.
· Work closely with senior team members to understand project requirements and contribute to technical solutions.
· Troubleshoot and debug AI/ML models and applications.
· Stay up-to-date with the latest advancements in AI/ML.
· Utilize machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) to develop and deploy models.
· Develop and deploy AI solutions on Google Cloud Platform (GCP).
· Implement data preprocessing and feature engineering techniques using libraries like Pandas and NumPy.
· Utilize Vertex AI for model training, deployment, and management.
· Integrate and leverage Google Gemini for specific AI functionalities.
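The preprocessing and feature engineering duties mentioned above can be illustrated with a short Pandas/NumPy sketch. The column names, imputation strategy, and transformations here are invented for the example, not taken from any real project.

```python
# Illustrative preprocessing / feature-engineering sketch with
# Pandas and NumPy: impute missing values, transform a skewed
# numeric feature, and one-hot encode a categorical feature.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [25, 40, np.nan, 31],
    "income": [30000, 85000, 52000, np.nan],
    "city": ["Bengaluru", "Pune", "Bengaluru", "Delhi"],
})

# Impute missing numeric values with the column median.
for col in ["age", "income"]:
    df[col] = df[col].fillna(df[col].median())

# Log-transform a skewed feature; log1p handles zero safely.
df["log_income"] = np.log1p(df["income"])

# One-hot encode the categorical feature.
df = pd.get_dummies(df, columns=["city"], prefix="city")

print(df.columns.tolist())
```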
Qualifications:
· Bachelor’s degree in Computer Science, Artificial Intelligence, or a related field.
· 3+ years of experience in developing and implementing AI/ML models.
· Strong programming skills in Python.
· Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn.
· Good understanding of machine learning concepts and techniques.
· Ability to work independently and as part of a team.
· Strong problem-solving skills.
· Good communication skills.
· Experience with Google Cloud Platform (GCP) is preferred.
· Familiarity with Vertex AI is a plus.
Main tasks
- Supervision of the CI/CD process for automated builds and deployments of web services, web applications, and desktop tools in cloud and container environments
- Responsibility for the operations side of a DevOps organization, especially for development involving container technology and orchestration, e.g. with Kubernetes
- Installation, operation, and monitoring of web applications in cloud data centers, both for development and test purposes and for running our own production cloud
- Carrying out installations of the solution, especially in container contexts
- Introduction, maintenance, and improvement of installation solutions for development in desktop and server environments as well as in the cloud and with on-premise Kubernetes
- Maintenance of the system installation documentation and delivery of training
- Execution of internal software tests and support of the involved teams and stakeholders
- Hands-on experience with Azure DevOps
Qualification profile
- Bachelor’s or master’s degree in communications engineering, electrical engineering, physics, or a comparable qualification
- Experience in software development
- Installation and administration of Linux and Windows systems, including networking and firewalling aspects
- Experience with build and deployment automation using tools like Jenkins, Gradle, Argo, AnangoDB, or similar, as well as system scripting (Bash, PowerShell, etc.)
- Interest in the operation and monitoring of applications in virtualized and containerized environments, in the cloud and on-premise
- Server environments, especially application, web, and database servers
- Knowledge of VMware/K3D/Rancher is an advantage
- Good spoken and written knowledge of English
Experience: 3+ years of experience in Cloud Architecture
About Company:
The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.
Cloud Architect / Lead
- Role Overview
- Senior Engineer with a strong background and experience in cloud-related technologies and architectures. Can design target cloud architectures to transform existing architectures together with the in-house team. Can actively configure and build cloud architectures hands-on and guide others.
- Key Knowledge
- 3-5+ years of experience in AWS/GCP or Azure technologies
- Is likely certified on one or more of the major cloud platforms
- Strong hands-on experience with technologies such as Terraform, Kubernetes (K8s), and Docker, and with container orchestration
- Ability to guide and lead internal agile teams on cloud technology
- Background in the financial services industry or similar experience running critical operations
Introduction
Synapsica (http://www.synapsica.com/) is a series-A funded (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body, without having to rely on a cryptic two-liner given to them as a diagnosis.
Towards this aim, we are building an artificial intelligence enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
The Integration Software Manager is responsible for building products that integrate Synapsica’s AI-based radiology software with client-side systems and horizontal app deployment platforms. They work with clients, product managers, the internal software team, and the business development team to design and build products that allow Synapsica apps to integrate with external systems, and they lead a team of engineers to build these products, owning end-to-end delivery. The role requires an understanding of various technologies and the ability to quickly learn and execute development plans with new technologies. At Synapsica, we have used Javascript, React, Nodejs, MongoDB, Python, DICOM, HL7, and AWS suite technologies, to name a few.
This is a highly visible role working directly with founders and requires a mix of technical acumen and team leadership skills to drive the execution of the platform. This person must be creative, ask questions, and be comfortable challenging the status quo. The position is based in our Bangalore office.
Primary Responsibilities
- Work at a key intersection between customers, AI team, product engineering teams, and business development teams
- Partner cross-functionally with product managers, core platform engineers, and AI team to improve adoption of our applications
- Build modules that help onboard new customers onto our radiology platform
- Own end-to-end design, documentation, development, and delivery of software that enables clients to use our radiology products effectively
- Ensure analysis, efficiency, responsiveness, scalability and cross-platform compatibility of applications through captured metrics, testing frameworks, and debugging methodologies.
- Technical documentation through all stages of development
- Create design, develop modules, and coordinate efforts with the development team, working on application architectural implementation
- Collaborate with Product Analysts and Product Managers to estimate and plan work and provide status updates to stakeholders
- Create a close working relationship with business partners to identify the pain points and provide better experience to clients
- Establish strong relationships, and proactively communicate, with team members as well as individuals across the organisation
Requirements
- Degree in Computer Science or related discipline with 6-10 years of experience.
- Proficiency with server-side languages such as Nodejs, Python, and shell scripting
- Quick adoption of new technologies.
- Proficiency with at least one NoSQL database such as MongoDB.
- Experience with platform components and REST APIs, to define platform interfaces and boundaries
- Experience creating a loosely coupled, services oriented design that can scale for large volumes of data
- Experience supporting extensibility, to plug new modules or services without requiring re-design
- Expertise in object-oriented programming and applying OO principles and patterns
- Good command over CI/CD processes.
- Excellent communication and collaboration skills with project members and stakeholders.
- Good problem-solving skills.
- Very high sense of ownership.
- Deep interest and passion for technology
- Prior experience of leading software teams
- Ability to plan projects, execute them, and meet deadlines
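The requirement above about supporting extensibility, so new modules plug in without re-design, is commonly met with a registry pattern that keeps the host loosely coupled from its integration modules. A minimal sketch, with all names (registry, handlers, protocols) invented for illustration:

```python
# Sketch of a plug-in registry: new integration modules register
# themselves, and the host dispatches by name instead of hard-coding
# module imports, so adding a module requires no re-design.
from typing import Callable, Dict

INTEGRATIONS: Dict[str, Callable[[dict], dict]] = {}

def register(name: str):
    """Decorator that plugs a new integration into the registry."""
    def wrap(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        INTEGRATIONS[name] = fn
        return fn
    return wrap

@register("hl7")
def handle_hl7(message: dict) -> dict:
    # A real handler would parse an HL7 payload; this one just tags it.
    return {"protocol": "hl7", **message}

@register("dicom")
def handle_dicom(message: dict) -> dict:
    return {"protocol": "dicom", **message}

def dispatch(protocol: str, message: dict) -> dict:
    # The host never names concrete modules, only the registry.
    return INTEGRATIONS[protocol](message)

print(dispatch("hl7", {"patient_id": "P001"}))
```

The same idea scales to services-oriented designs: replace the in-process registry with service discovery, and each "handler" becomes an independently deployable service behind a REST API.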
- Min. 3-4 years of hands-on experience with the items below is mandatory; Altium Designer/Allegro are the preferred tools.
- Design of multilayer impedance-controlled PCBs
- Schematic symbol and Footprint Library Creation as per IPC standard.
- Hands-on experience with high speed and mixed-signal multilayer layout design.
- Component Placement and Routing plan preparation for design effectiveness
- Complex layout design with DFM and DFT Considerations.
- Gerber Generation and CAM Validation
Required Skill Sets:
- Knowledge of DFM & DFT
- Knowledge of layer stack-up build.
- Knowledge of EMI/EMC considerations for the successful layout design.
- Knowledge of IPC standards.
- In-depth knowledge of PCB fabrication and assembly processes.
- Knowledge of interpreting 2D/3D CAD drawings.
- Willingness to learn new tools, technologies, and processes.
Qualification:
- Diploma/BE/B. Tech in Electronics or any other discipline.
