50+ Python Jobs in Bangalore (Bengaluru) | Python Job openings in Bangalore (Bengaluru)
Apply to 50+ Python Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.
Client based at Bangalore location.
Ab Initio Developer
About the Role:
We are seeking a skilled Ab Initio Developer to join our dynamic team and contribute to the development and maintenance of critical data integration solutions. As an Ab Initio Developer, you will be responsible for designing, developing, and implementing robust and efficient data pipelines using Ab Initio's powerful ETL capabilities.
Key Responsibilities:
· Design, develop, and implement complex data integration solutions using Ab Initio's graphical interface and command-line tools.
· Analyze complex data requirements and translate them into effective Ab Initio designs.
· Develop and maintain efficient data pipelines, including data extraction, transformation, and loading processes.
· Troubleshoot and resolve technical issues related to Ab Initio jobs and data flows.
· Optimize performance and scalability of Ab Initio jobs.
· Collaborate with business analysts, data analysts, and other team members to understand data requirements and deliver solutions that meet business needs.
· Stay up-to-date with the latest Ab Initio technologies and industry best practices.
Required Skills and Experience:
· 2.5 to 8 years of hands-on experience in Ab Initio development.
· Strong understanding of Ab Initio components, including Designer, Conductor, and Monitor.
· Proficiency in Ab Initio's graphical interface and command-line tools.
· Experience in data modeling, data warehousing, and ETL concepts.
· Strong SQL skills and experience with relational databases.
· Excellent problem-solving and analytical skills.
· Ability to work independently and as part of a team.
· Strong communication and documentation skills.
Preferred Skills:
· Experience with cloud-based data integration platforms.
· Knowledge of data quality and data governance concepts.
· Experience with scripting languages (e.g., Python, Shell scripting).
· Certification in Ab Initio or related technologies.
Responsibilities
· Design, develop, and deploy full-stack web applications using React.js and Python-Django.
· Build and optimize back-end systems, APIs, and database management solutions.
· Deploy and manage scalable and secure applications on AWS infrastructure.
· Perform code reviews, testing, and debugging to ensure high-quality, production-ready code.
· Collaborate with cross-functional teams to gather requirements and translate them into technical solutions.
· Take full ownership of assigned projects, ensuring timely and successful delivery.
· Implement and maintain CI/CD pipelines and DevOps practices.
Requirements
· Proven experience (2-4 years) with React.js and Python-Django in a professional setting.
· Strong expertise in full-stack development, including front-end, back-end, and database management.
· Proficiency with AWS services (e.g., EC2, S3,Fargate, Lambda) for deploying and managing applications.
· Experience with version control systems like Git.
· Solid understanding of best practices in software development, including Agile methodologies.
· Strong problem-solving skills, attention to detail, and ability to handle complex technical challenges.
· Excellent communication skills and a collaborative mindset.
ABOUT THE TEAM:
The production engineering team is responsible for the key operational pillars (Reliability, Observability, Elasticity, Security, and Governance) of the Cloud infrastructure at Swiggy. We thrive to excel & continuously improve on these key operational pillars. We design, build, and operate Swiggy’s cloud infrastructure and developer platforms, to provide a seamless experience to our internal and external consumers.
What qualities are we looking for:
10+ years of professional experience in infrastructure, production engineering
Strong design, debugging, and problem-solving skills
Proficiency in at least one programming language like Python, GoLang or Java.
B Tech/M Tech in Computer Science or equivalent from a reputed college.
Hands-on experience with AWS and Kubernetes or similar cloud/infrastructure platforms
Hands-on with DevOps principles and practices ( Everything-as-a-code, CI/CD, Test everything, proactive monitoring etc)
Deep understanding of OS/virtualization/Containerization, network protocols & concepts
Exposure to modern-day infrastructure technologies, expertise in building and operating distributed systems.
Hands-on coding on any of the languages like Python or GoLang.
Familiarity with software engineering practices including unit testing, code reviews, and design documentation.
Technically mentor and lead the team towards engineering and operational excellence
Act like an owner, strive for excellence.
What will you get to do here?
Be part of a Culture where Customer Obsession, Ownership, Teamwork, Bias for Action and Insist on High standards are a way of life
Coming up with the best practices to help the team achieve their technical tasks and continually thrive in improving the technology of the team
Be a hands-on engineer, ensure frameworks/infrastructure built is well designed, scalable & are of high quality.
Build and/or operate platforms that are highly available, elastic, scalable, operable and observable
Experiment with new & relevant technologies and tools, and drive adoption while measuring yourself on the impact you can create.
Implementation of long-term technology vision for the team.
Build/Adapt and implement tools that empower the Swiggy engineering teams to self-manage the infrastructure and services owned by them.
You will identify, articulate, and lead various long-term tech vision, strategies, cross-cutting initiatives and architecture redesigns.
Design systems and make decisions that will keep pace with the rapid growth of Swiggy. Document your work and decision-making processes, and lead presentations and discussions in a way that is easy for others to understand.
Creating architectures & designs for new solutions around existing/new areas. Decide technology & tool choices for the team.
· Designing and building data pipelines
Guiding customers on how to architect and build data engineering pipelines on Snowflake leveraging Snowpark, UDF etc for IOT Streaming analytics
· Developing code
Writing code in familiar programming languages, such as Python, and executing it within the Snowflake Data Cloud
· Data Science
Perform complex time series-based analyses for predictive and prescriptive maintenance and then later perform backpropagation based on Deep Learning (ANN, RNN etc) to uncover additional algorithms and embed them into the MLOPs pipeline
· Analyzing and testing software
Performing complex analysis, design, development, testing, and debugging of computer software
· Creating documentation
Creating repeatable processes and documentation as a result of customer engagement
· Developing best practices
Developing best practices, including ensuring knowledge transfer so that customers are properly enabled
· Working with stakeholders
Working with appropriate stakeholders to define system scope and objectives and establish baselines
· Querying and processing data
Querying and processing data with a DataFrame object
· Converting lambdas and functions
Converting custom lambdas and functions to user-defined functions (UDFs) that you can call to process data
· Writing stored procedures
Writing a stored procedure that you can call to process data, or automate with a task to build a data pipeline
Mammoth
Mammoth is a data management platform revolutionizing the way people work with data. Our lightweight, self-serve SaaS analytics solution takes care of data ingestion, storage, cleansing, transformation, and exploration, empowering users to derive insights with minimal friction. Based in London, with offices in Portugal and Bangalore, we offer flexibility in work locations, whether you prefer remote or in-person office interactions.
Job Responsibilities
- Collaboratively build robust, scalable, and maintainable software across the stack.
- Design, develop, and maintain APIs and web services.
- Dive into complex challenges around performance, scalability, concurrency, and deliver quality results.
- Improve code quality through writing unit tests, automation, and conducting code reviews.
- Work closely with the design team to understand user requirements, formulate use cases, and translate them into effective technical solutions.
- Contribute ideas and participate in brainstorming, design, and architecture discussions.
- Engage with modern tools and frameworks to enhance development processes.
Skills Required
Frontend:
- Proficiency in Vue.js and a solid understanding of JavaScript.
- Strong grasp of HTML/CSS, with attention to detail.
- Knowledge of TypeScript is a plus.
Backend:
- Strong proficiency in Python and mastery of at least one framework (Django, Flask, Pyramid, FastAPI or litestar).
- Experience with database systems such as PostgreSQL.
- Familiarity with performance trade-offs and best practices for backend systems.
General:
- Solid understanding of fundamental computer science concepts like algorithms, data structures, databases, operating systems, and programming languages.
- Experience in designing and building RESTful APIs and web services.
- Ability to collaborate effectively with cross-functional teams.
- Passion for solving challenging technical problems with innovative solutions.
- AI/ML or Willingness to learn on need basis.
Nice to have:
- Familiarity with DevOps tools like Docker, Kubernetes, and Ansible.
- Understanding of frontend build processes using tools like Vite.
- Demonstrated experience with end-to-end development projects or personal projects.
Job Perks
- Free lunches and stocked kitchen with fruits and juices.
- Game breaks to unwind.
- Work in a spacious and serene office located in Koramangala, Bengaluru.
- Opportunity to contribute to a groundbreaking platform with a passionate and talented team.
If you’re an enthusiastic developer who enjoys working across the stack, thrives on solving complex problems, and is eager to contribute to a fast-growing, mission-driven company, apply now!
Leverage your expertise in Python to design and implement distributed systems for web scraping, ensuring robust data extraction from diverse web sources.
• Develop and optimize scalable, self-healing scraping frameworks, integrated with AI tools for intelligent automation of the data collection process.
• Implement monitoring, logging, and alerting mechanisms to ensure high availability and performance of distributed web scraping systems.
• Work with large-scale NoSQL databases (e.g., MongoDB) to store and query scraped data efficiently.
• Collaborate with cross-functional teams to research and implement innovative AI-driven solutions for data extraction and automation.
• Ensure data integrity and security while interacting with various web sources.
Required Skills: • Extensive experience with Python and web frameworks like Flask, FastAPI, or Django.
• Experience with AI tools and machine learning libraries to enhance and automate scraping processes
. • Solid understanding of building and maintaining distributed systems, with hands-on experience in parallel programming (multithreading, asynchronous, multiprocessing).
• Working knowledge of asynchronous queue systems like Redis, Celery, RabbitMQ, etc., to handle distributed scraping tasks.
• Proven experience with web mining, scraping tools(e.g., Scrapy, BeautifulSoup, Selenium), and handling dynamic content.
• Proficiency in working with NoSQL data storage systems like MongoDB, including querying and handling large datasets.
• Knowledge of working with variousfront-end technologies and how various websites are built
- Advanced Linux / Unix support experience required.
- · Strong shell scripting and python programming skills for SRE related activities required.
- · Understanding on Veritas Cluster Service, Load Balancers, VMWare and Splunk required.
- · Knowledge on ITIL principles required.
- · Effective oral and written communication skills, and interpersonal skills to work well in a team environment required.
- · Strong organizational and coordination skills with the ability to manage multiple tasks and high-pressure situations for outage handling, management or resolution.
- · Be available for weekend work.
About Streamlyn
Streamlyn is a regional leader in adtech, specializing in enhancing monetization for Publishers through innovative and compelling ad products. Our ad tech engagement solutions suite empowers Publishers to elevate their business and monetize their content effectively. With a vast network of premium publisher partners across Asia, Streamlyn reaches over 100 million consumers monthly.
Job Overview
As the Tech Lead, you will play a critical role in defining the future of our advanced ad tech platform. This position requires full technical ownership of a high-impact project, where you’ll architect and implement a scalable, high-performance system while guiding a dedicated team of developers to advance our product roadmap.
In this role, you’ll balance strategic technical oversight with hands-on involvement. You will combine strategic technical leadership with hands-on involvement to oversee the platform’s scalability, reliability, and technical evolution. Success will depend on a combination of ad tech expertise, strategic vision, and strong leadership as you drive innovation and foster a high-performance engineering culture.
Roles and Responsibilities
- Independent Project Leadership: Lead a dedicated project stream within our ad tech platform, from initial architecture and design through to delivery. Take full accountability for your project’s technical vision, resource planning, and execution, ensuring it meets performance and scalability requirements.
- Strategic Technical Ownership: Define and implement a technical roadmap that aligns with Streamlyn’s goals, focusing on scalability, resilience, and innovation. Ensure project components are optimized for real-time bidding, ad serving, and data processing requirements unique to ad tech.
- Performance & Scalability: Architect a platform capable of processing billions of requests daily with minimal latency. Utilize advanced load balancing, caching, and database technologies to achieve optimal performance and reliability.
- Team Building & Mentorship: Lead a specialized team of developers, fostering an environment of technical excellence and innovation. Set high standards for code quality, maintainability, and performance while mentoring team members and encouraging growth.
- Product Innovation & Cross-Functional Collaboration: Work with product, operations, and sales teams to ensure the products and platform meet high standards and unique demands. Drive the development of cutting-edge features like custom ad units and contextual targeting to keep Streamlyn competitive.
- Ad Tech Expertise: Apply a deep understanding of ad tech—DSPs, SSPs, oRTB protocols, and header bidding frameworks—to ensure seamless integration and adherence to industry best practices.
- Team and Process Development: Drive agile methodologies and efficient development processes within your team to support responsive and high-quality delivery. Implement continuous improvement practices that optimize team productivity and platform reliability.
Qualifications
- Experience: 5+ years in software development with hands-on ad tech experience, ideally in building ad servers, header bidding solutions, oRTB (Open Real-Time Bidding) integrations, and data pipelines.
- Technical Proficiency -
- Languages: Advanced knowledge in Java and Python.
- Databases: Strong experience with MariaDB, PostgreSQL, Aerospike, and familiarity with other NoSQL technologies.
- Data Pipeline: Proven experience designing and implementing data pipelines for handling large-scale ad data in real-time, including ingestion, processing, and transformation.
- Infrastructure and Cloud: Proficiency with AWS Cloud services, including EMR, EKS, and Docker Containers. Experience with Apache Spark, Kafka, and MKS.
- Ad Tech Ecosystem Knowledge: Strong understanding of the ad tech landscape, including DSPs, SSPs, header bidding, programmatic advertising, and RTB protocols.
- Architectural Expertise: Advanced skills in designing distributed, microservices-based architectures and developing APIs for high-performance, low-latency applications. Advanced knowledge in load balancing, caching, and database technologies to enhance performance and scalability.
- System Monitoring & Optimization: Hands-on experience with monitoring and optimization tools, such as Prometheus, Grafana, or similar for system reliability.
- Data Engineering: Experience with ETL (Extract, Transform, Load) processes, data warehousing, and building data flows for analytical and reporting purposes.
- Agile and Project Management: Experience with Agile methodologies and the ability to prioritize and manage complex projects.
- Leadership Skills: Demonstrated ability to lead a team, with experience in mentorship, team building, and technical coaching. Comfortable taking ownership and accountable for team deliverables.
- Communication: Excellent communication and collaboration skills, with the ability to articulate complex technical issues and facilitate cross-functional team discussions.
About Us:
Optimo Capital is a newly established NBFC founded by Prashant Pitti, who is also a co-founder of EaseMyTrip (a billion-dollar listed startup that grew profitably without any funding).
Our mission is to serve the underserved MSME businesses with their credit needs in India. With less than 15% of MSMEs having access to formal credit, we aim to bridge this credit gap through a phygital model (physical branches + digital decision-making).
As a technology and data-first company, tech lovers and data enthusiasts play a crucial role in building the analytics & tech at Optimo that helps the company thrive.
What We Offer:
Join our dynamic startup team as a Senior Data Analyst and play a crucial role in core data analytics projects involving credit risk, lending strategy, credit underwriting features analytics, collections, and portfolio management. The analytics team at Optimo works closely with the Credit & Risk departments, helping them make data-backed decisions.
This is an exceptional opportunity to learn, grow, and make a significant impact in a fast-paced startup environment. We believe that the freedom and accountability to make decisions in analytics and technology bring out the best in you and help us build the best for the company. This environment offers you a steep learning curve and an opportunity to experience the direct impact of your analytics contributions. Along with this, we offer industry-standard compensation.
What We Look For:
We are looking for individuals with a strong analytical mindset and a fundamental understanding of the lending industry, primarily focused on credit risk. We value not only your skills but also your attitude and hunger to learn, grow, lead, and thrive, both individually and as part of a team. We encourage you to take on challenges, bring in new ideas, implement them, and build the best analytics systems. Your willingness to put in the extra hours to build the best will be recognized.
Skills/Requirements:
- Credit Risk & Underwriting: Fundamental knowledge of credit risk and underwriting processes is mandatory. Experience in any lending financial institution is a must. A thorough understanding of all the features evaluated in the underwriting process like credit report info, bank statements, GST data, demographics, etc., is essential.
- Analytics (Python): Excellent proficiency in Python - Pandas and Numpy. A strong analytical mindset and the ability to extract actionable insights from any analysis are crucial. The ability to convert the given problem statements into actionable analytics tasks and frame effective approaches to tackle them is highly desirable.
- Good to have but not mandatory: REST APIs: A fundamental understanding of APIs and previous experience or projects related to API development or integrations. Git: Proficiency in version control systems, particularly Git. Experience in collaborative projects using Git is highly valued.
What You'll Be Working On:
- Analyze data from different data sources, extract information, and create action items to tackle the given open-ended problems.
- Build strong analytics systems and dashboards that provide easy access to data and insights, including the current status of the company, portfolio health, static pool, branch-wise performance, TAT (turnaround time) monitoring, and more.
- Assist the credit and risk team with insights and action items, helping them make data-backed decisions and fine-tune the credit policy (high involvement in the credit and underwriting process).
- Work on different rule engines that automate the underwriting process end-to-end.
Other Requirements:
- Availability for full-time work in Bangalore. Immediate joiners are preferred.
- Strong passion for analytics and problem-solving.
- At least 1 year of industry experience in an analytics role, specifically in a lending institution, is a must.
- Self-motivated and capable of working both independently and collaboratively.
If you are ready to embark on an exciting journey of growth, learning, and innovation, apply now to join our pioneering team in Bangalore.
Greetings from Analyttica Datalab. We are currently seeking an Analytics Manager to join our team!
If you are looking for a place to grow your skills and share our values this can be the right opportunity for you.
- Role: Analytics Manager
- Relevant Experience: 7-9 Years
- Type of Role: Full-Time
- Notice Period:1 Month or Less (Immediate Joiners-Preferred)
- Mandatory Skills: Data Analytics, Marketing Analytics, A/B Testing, SQL, Logistic Regression, Segmentation, Customer Analytics, Fintech, Python
- Work Location: Bangalore
About Us
We are fast growing technology enabled advanced analytics and data science start-up. We have clients in USA and India and those two geographies are our current market focus. We are developing
an Artificial Intelligence (AI) embedded data science and analytics product platform (patented in US) with a series of offerings to solve real world problems. We have a world-class team of in-house
technocrats, external consultants, SMEs, data scientists, and experienced business professionals driving the product journey.
What Can You Expect?
Along with the commercial aspects of the engagement (no bar for the right candidate), you can expect an agile environment, with progressive culture and state of the art workplace. You will be presented with a great learning opportunity, while interacting with people from diverse background with the depth and breadth of experiences. Our technical, SME, and business leadership team comprises of each professional with 15+ years of experience and strong credibility to their names.
What Role You Are Expected To Play?
As a Product Analytics Manager focused on growth, you’ll be responsible for guiding the Product Analytics feature and experimentation roadmap which may include but is not limited to go-to-market strategy for partnerships and application of analytics for optimizing the acquisition funnel strategy for our client. Specifically,
you will be responsible for driving higher originations by converting web and app visitors to long-term customers.
Roles and Responsibilities:
- Ability to work hands-on on the top prioritized and strategic initiatives to drive business outcomes.
- Lead and mentor a team of sr. analysts and analysts, fostering a culture of collaboration and data-driven decision-making.
- Work closely with US stakeholders and manage expectations on project delivery, team management and capacity management.
- Develop and implement measurement strategies across marketing and product initiatives, ensuring data accuracy and integrity.
- Conduct in-depth analyses of marketing and product performance, identifying key trends, opportunities, and areas for improvement.
- Design and execute A/B tests to optimize marketing campaigns and product features.
- Prior experience in lifecycle marketing and product cross-selling is preferred.
- Develop and maintain dashboards and reports to track key metrics and provide actionable insights to stakeholders.
- Collaborate closely with cross-functional teams (marketing, product, engineering) to understand business needs and translate data into actionable recommendations.
- Stay up to date on industry trends and best practices in analytics and data management.
- Own the delivery in-take, time management and prioritization of the tasks for your team, leveraging JIRA
- Prior experience working for a US based Fintech would be preferred.
Essential Experience and Qualification:
- Minimum 7 years of experience in marketing and/or product analytics.
- Bachelor's degree in a quantitative field (e.g., Statistics, Mathematics, Computer Science, Economics). Master's degree is a plus.
- Proven expertise in SQL and data management.
- Hands-on experience with Python and advanced analytics techniques (e.g., regression analysis, classification techniques, predictive modeling).
- Hands-on experience with data visualization tools (e.g., Tableau, Looker).
- Hands-on exposure to Git & JIRA for project management.
- Strong understanding of A/B testing methodologies and statistical analysis.
- Excellent communication, presentation, and interpersonal skills.
- Ability to manage multiple projects and deadlines effectively.
- Coach and mentor team members on refining analytical approach, time management & prioritization.
- Prior experience working in a fast paced environment and working with US stakeholders
People & Culture
As a company, we believe in working hard and
letting our passion drive us to dream high and
achieve higher. We inspire innovation through
freedom of thought and creativity, and pursue
excellence while ensuring that our integrity
and ethical foundation remain strong.
Honesty, determination, and perseverance
form the core of our value system. Analyttica’s
versatile talent fulfils our value proposition by
taking on fresh challenges and delivering
solutions that blend skill and intuition (science
and art). Understanding the customer/partner
mindset helps us provide technology enabled
scientific solutions at scale for sustainable
business impact.
Industry
Client based at Bangalore location.
Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Knowing Japanese Language is an additional advantage for the Candidate. Not mandatory.
Job Description:
We are seeking a skilled Quant Developer with strong programming expertise and an interest in financial markets. You will be responsible for designing, developing, and optimizing trading algorithms, analytics tools, and quantitative models. Prior experience with cryptocurrencies is a plus but not mandatory.
Key Responsibilities:
• Develop and implement high-performance trading algorithms and strategies.
• Collaborate with quantitative researchers to translate models into robust code.
• Optimize trading system performance and latency.
• Maintain and improve existing systems, ensuring reliability and scalability.
• Work with market data, conduct analysis, and support trading operations.
Required Skills:
• Strong proficiency in Python and/or C++.
• Solid understanding of data structures, algorithms, and object-oriented programming.
• Familiarity with financial markets or trading systems (experience in crypto is a bonus).
• Experience with distributed systems, databases, and performance optimization.
• Knowledge of numerical libraries (e.g., NumPy, pandas, Boost) and statistical analysis.
Preferred Qualifications:
• Experience in developing low-latency systems.
• Understanding of quantitative modeling and back testing frameworks.
• Familiarity with trading protocols (FIX, WebSocket, REST APIs).
• Interest or experience in cryptocurrencies and blockchain technologies.
Key Responsibilities:
- Designing, developing, and maintaining AI/NLP-based software solutions.
- Collaborating with cross-functional teams to define requirements and implement new features.
- Optimizing performance and scalability of existing systems.
- Conducting code reviews and providing constructive feedback to team members.
- Staying up-to-date with the latest developments in AI and NLP technologies.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field.
- 6+ years of experience in software development, with a focus on Python programming.
- Strong understanding of artificial intelligence and natural language processing concepts.
- Hands-on experience with AI/NLP frameworks such as Llamaindex/Langchain, OpenAI, etc.
- Experience in implementing Retrieval-Augmented Generation (RAG) systems for enhanced AI solutions.
- Proficiency in building and deploying machine learning models.
- Excellent problem-solving skills and attention to detail.
- Strong communication and interpersonal skills.
Preferred Qualifications:
- Master's degree or higher in Computer Science, Engineering, or related field.
- Experience with cloud computing platforms such as AWS, Azure, or Google Cloud.
- Familiarity with big data technologies such as Hadoop, Spark, etc.
- Contributions to open-source projects related to AI/NLP
Senior Backend Developer
Job Overview: We are looking for a highly skilled and experienced Backend Developer who excels in building robust, scalable backend systems using multiple frameworks and languages. The ideal candidate will have 4+ years of experience working with at least two backend frameworks and be proficient in at least two programming languages such as Python, Node.js, or Go. As an Sr. Backend Developer, you will play a critical role in designing, developing, and maintaining backend services, ensuring seamless real-time communication with WebSockets, and optimizing system performance with tools like Redis, Celery, and Docker.
Key Responsibilities:
- Design, develop, and maintain backend systems using multiple frameworks and languages (Python, Node.js, Go).
- Build and integrate APIs, microservices, and other backend components.
- Implement real-time features using WebSockets and ensure efficient server-client communication.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Optimize backend systems for performance, scalability, and reliability.
- Troubleshoot and debug complex issues, providing efficient and scalable solutions.
- Work with caching systems like Redis to enhance performance and manage data.
- Utilize task queues and background job processing tools like Celery.
- Develop and deploy applications using containerization tools like Docker.
- Participate in code reviews and provide constructive feedback to ensure code quality.
- Mentor junior developers, sharing best practices and promoting a culture of continuous learning.
- Stay updated with the latest backend development trends and technologies to keep our solutions cutting-edge.
Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- 4+ years of professional experience as a Backend Developer.
- Proficiency in at least two programming languages: Python, Node.js, or Go.
- Experience working with multiple backend frameworks (e.g., Express, Flask, Gin, Fiber, FastAPI).
- Strong understanding of WebSockets and real-time communication.
- Hands-on experience with Redis for caching and data management.
- Familiarity with task queues like Celery for background job processing.
- Experience with Docker for containerizing applications and services.
- Strong knowledge of RESTful API design and implementation.
- Understanding of microservices architecture and distributed systems.
- Solid understanding of database technologies (SQL and NoSQL).
- Excellent problem-solving skills and attention to detail.
- Strong communication skills, both written and verbal.
Preferred Qualifications:
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Familiarity with CI/CD pipelines and DevOps practices.
- Experience with GraphQL and other modern API paradigms.
- Familiarity with task queues, caching, or message brokers (e.g., Celery, Redis, RabbitMQ).
- Understanding of security best practices in backend development.
- Knowledge of automated testing frameworks for backend services.
- Familiarity with version control systems, particularly Git.
Building the machine learning production (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable Model tracking, model experimentation, Model automation
- Develop ML pipelines to support
- Develop MLOps components in Machine learning development life cycle using Model Repository (either of): MLFlow, Kubeflow Model Registry
- Develop MLOps components in Machine learning development life cycle using Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku or any relevant ML E2E PaaS/SaaS
- Work across all phases of Model development life cycle to build MLOPS components
- Build the knowledge base required to deliver increasingly complex MLOPS projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 3-5 years experience building production-quality software.
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR Equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Knowledge of MLOps, machine learning and docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- CI/CD experience( i.e. Jenkins, Git hub action,
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Building the machine learning production System(or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-ofthe-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable Model tracking, model experimentation, Model automation
- Develop scalable ML pipelines
- Develop MLOps components in Machine learning development life cycle using Model Repository (either of): MLFlow, Kubeflow Model Registry
- Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku or any relevant ML E2E PaaS/SaaS
- Work across all phases of Model development life cycle to build MLOPS components
- Build the knowledge base required to deliver increasingly complex MLOPS projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 5.5-9 years experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR equivalent
- Strong experience in System Integration, Application Development or Datawarehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning and docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production ready ML pipeline.
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem solving, project management and communication skills & creative thinking
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including Lang chain, llama index, vector databases, Prompt engineering (COT, TOT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, AWS Bedrock for text/audio/image/video modalities.
- Familiarity with Open-source LLMs, including tools like TensorFlow/Pytorch and Huggingface. Techniques such as quantization, LLM finetuning using PEFT, RLHF, data annotation workflow, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
Data Scientist is responsible to discover the information hidden in vast amounts of data, and help us make smarter decisions to deliver better products. Your primary focus will be in applying Machine Learning and Generative AI techniques for data mining and statistical analysis, Text analytics using NLP/LLM and building high quality prediction systems integrated with our products. The ideal candidate should have a prior background in Generative AI, NLP (Natural Language Processing), and Computer Vision techniques. Additionally, experience in working with current state of the art Large Language Models (LLMs), and Computer Vision algorithms.
Job Responsibilities:
» Building models using best in AI/ML technology.
» Leveraging your expertise in Generative AI, Computer Vision, Python, Machine Learning, and Data Science to develop cutting-edge solutions for our products.
» Integrating NLP techniques, and utilizing LLM's in our products.
» Training/fine tuning models with new/modified training dataset.
» Selecting features, building and optimizing classifiers using machine learning techniques.
» Conducting data analysis, curation, preprocessing, modelling, and post-processing to drive data-driven decision-making.
» Enhancing data collection procedures to include information that is relevant for building analytic systems
» Working understanding of cloud platforms (AWS).
» Collaborating with cross-functional teams to design and implement advanced AI models and algorithms.
» Involving in R&D activities to explore the latest advancements in AI technologies, frameworks, and tools.
» Documenting project requirements, methodologies, and outcomes for stakeholders.
Technical skills
Mandatory
» Minimum of 5 years of experience as Machine Learning Researcher or Data Scientist.
» Master's degree or Ph.D. (preferable) in Computer Science, Data Science, or a related field.
» Should have knowledge and experience in working with Deep Learning projects using CNN, Transformers, Encoder and decoder architectures.
» Working experience with LLM's (Large Language Models) and their applications (For e.g., tuning embedding models, data curation, prompt engineering, LoRA, etc.).
» Familiarity with LLM Agents and related frameworks.
» Good programming skills in Python and experience with relevant libraries and frameworks (e.g., PyTorch, and TensorFlow).
» Good applied statistics skills, such as distributions, statistical testing, regression, etc.
» Excellent understanding of machine learning and computer vision based techniques and algorithms.
» Strong problem-solving abilities and a proactive attitude towards learning and adopting new technologies.
» Ability to work independently, manage multiple projects simultaneously, and collaborate effectively with diverse stakeholders.
Nice to have
» Exposure to financial research domain
» Experience with JIRA, Confluence
» Understanding of scrum and Agile methodologies
» Basic understanding of NoSQL databases, such as MongoDB, Cassandra
Experience with data visualization tools, such as Grafana, GGplot, etc.
Job Description for Data Engineer Role:-
Must have:
Experience working with Programming languages. Solid foundational and conceptual knowledge is expected.
Experience working with Databases and SQL optimizations
Experience as a team lead or a tech lead and is able to independently drive tech decisions and execution and motivate the team in ambiguous problem spaces.
Problem Solving, Judgement and Strategic decisioning skills to be able to drive the team forward.
Role and Responsibilities:
- Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, mentoring other members of the engineering community, and from time to time, be asked to code or evaluate code
- Collaborate with digital product managers, and leaders from other team to refine the strategic needs of the project
- Utilize programming languages like Java, Python, SQL, Node, Go, and Scala, Open Source RDBMS and NoSQL databases
- Defining best practices for data validation and automating as much as possible; aligning with the enterprise standards
Qualifications -
- Experience with SQL and NoSQL databases.
- Experience with cloud platforms, preferably AWS.
- Strong experience with data warehousing and data lake technologies (Snowflake)
- Expertise in data modelling
- Experience with ETL/LT tools and methodologies
- Experience working on real-time Data Streaming and Data Streaming platforms
- 2+ years of experience in at least one of the following: Java, Scala, Python, Go, or Node.js
- 2+ years working with SQL and NoSQL databases, data modeling and data management
- 2+ years of experience with AWS, GCP, Azure, or another cloud service.
We are seeking a talented UiPath Developer with experience in Python, SQL, Pandas, and NumPy to join our dynamic team. The ideal candidate will have hands-on experience developing RPA workflows using UiPath, along with the ability to automate processes through scripting, data manipulation, and database queries.
This role offers the opportunity to collaborate with cross-functional teams to streamline operations and build innovative automation solutions.
Key Responsibilities:
- Design, develop, and implement RPA workflows using UiPath.
- Build and maintain Python scripts to enhance automation capabilities.
- Utilize Pandas and NumPy for data extraction, manipulation, and transformation within automation processes.
- Write optimized SQL queries to interact with databases and support automation workflows.
Skills and Qualifications:
- 2 to 5 years of experience in UiPath development.
- Strong proficiency in Python and working knowledge of Pandas and NumPy.
- Good experience with SQL for database interactions.
- Ability to design scalable and maintainable RPA solutions using UiPath.
Job Overview:
We are seeking a skilled Senior Python Full Stack Developer with a strong background in application architecture and design. The ideal candidate will be proficient in Python, with extensive experience in web frameworks such as Django or Flask along with front-end technologies- React, JavaScript. You'll play a key role in designing scalable applications, collaborating with cross-functional teams, and leveraging cloud technologies.
Key Responsibilities:
- Backend Development:
- - Architect, develop, and maintain high-performance backend systems using Python or Golang.
- - Build and optimize APIs and microservices that power innovative, user-focused features.
- - Implement security and data protection measures that are scalable from day one.
- - Collaborate closely with DevOps to deploy and manage applications seamlessly in dynamic cloud environments.
- Frontend Development:
- - Work hand-in-hand with front-end developers to integrate and harmonize backend systems with React-based applications.
- - Contribute to the UI/UX design process, ensuring an intuitive, frictionless user experience that aligns with the startup’s vision.
- - Continuously optimize web applications to ensure they are fast, responsive, and scalable as the user base grows.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in backend development, with proficiency in Python and/or Golang.
- Strong experience in front-end technologies, particularly React.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization tools like Docker and Kubernetes.
- Knowledge of Apache Spark is highly preferred.
- Solid understanding of database technologies, both relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB, Cassandra).
- Experience with CI/CD pipelines and automated testing frameworks.
- Excellent problem-solving skills and a proactive attitude toward tackling challenges
team excelled in providing top-notch business solutions to industries such as Ecommerce, Marketing, Banking and Finance, Insurance, Transport and many more. For a generation that is driven by data, Insights & decision making, we help businesses to make the best possible use of data and enable businesses to thrive in this competitive space. Our expertise spans across Data, Analytics and Engineering to name a few.
● Strong programming skills in languages such as Java, Python, or C#.
● Experience with test automation tools (e.g., Selenium,
Appium, TestNG and CI/CD tools (e.g., Jenkins, GitLab CI).
Technical & Behavioural Skills
● Strong understanding of software testing methodologies, including functional,regression, and performance testing.
● Proficiency in scripting languages and test automation frameworks.
● Strong problem-solving skills and attention to detail.
● Excellent communication and collaboration skills, with the ability to work effectively in a
fast - paced, agile development environment.
● Experience with API testing tools (e.g., Postman, REST Assured) and performance
testing tools (e.g., JMeter, LoadRunner) is a plus.
Key Responsibilities
Test Automation Development:
● Design, develop, and maintain automated test frameworks, test scripts, and test cases
for , regression, and performance testing.
● Integrate automated tests into the CI/CD pipeline to provide continuous feedback on
product quality.
● Collaborate with developers to create and maintain testable software components and
interfaces.
● Test Planning and Execution:
● Analyze product requirements and technical specifications to develop comprehensive
test plans and test cases.
● Execute automated and manual tests, analyze test results, and report defects and issues
to the development team.
● Ensure that test environments are properly configured and maintained.
at Indee
About Indee
Indee is among the leading providers of a proprietary platform for secure video distribution and streaming, used by some of the world’s largest media companies, including Netflix, Paramount Pictures, Disney, and over 1100 other companies, big and small. Indee has grown 5x in the last 3 years and is scaling up at a rapid rate.
About the role
We are seeking a highly skilled and experienced Automation Engineer to join our dynamic team. As an Automation Engineer, you will play a key role in designing, implementing, and maintaining our automation testing framework. The primary focus of this role will be on utilizing Selenium, Pytest, Allure reporting, Python Requests, and Boto3 for automation testing and infrastructure management.
Responsibilities:
- Develop and maintain automated test scripts using Selenium WebDriver and Pytest to ensure the quality of web applications.
- Implement and enhance the automation testing framework to support scalability, reliability, and efficiency.
- Generate comprehensive test reports using Allure reporting for test result visualization and analysis.
- Conduct API testing using Python Requests, ensuring the functionality and reliability of backend services.
- Utilize Boto3 for automation of AWS infrastructure provisioning, configuration, and management.
- Collaborate with cross-functional teams, including developers, QA engineers, and DevOps engineers, to understand project requirements and deliver high-quality solutions.
- Identify opportunities for process improvement and optimization within the automation testing process.
- Provide technical expertise and guidance to junior team members, fostering a culture of continuous learning and development.
- Stay updated on industry trends and emerging technologies, incorporating them into our automation testing practices as appropriate.
- Participate in code reviews, ensuring adherence to coding standards and best practices.
Requirements:
- Strong programming skills in Python, with proficiency in writing clean, maintainable code.
- Experience with cloud infrastructure management and automation using AWS services and Boto3.
- Solid understanding of software testing principles, methodologies, and best practices.
- Excellent problem-solving skills and attention to detail.
- Ability to work effectively both independently and collaboratively in a fast-paced environment.
- Strong communication and interpersonal skills, with the ability to interact with stakeholders at all levels.
- Passion for technology and a desire to continuously learn and improve.
- Prior experience in Agile development methodologies.
- Experience with performance testing using Locust is considered a plus.
Qualifications:
- Education: Bachelor's degree in Computer Science, Software Engineering, or related field; Master’s degree preferred.
- Experience: 3 - 5 years of proven experience in automation testing using Selenium WebDriver, Pytest, Appium, Allure reporting, Python Requests, and Boto3
Benefits:
- Competitive salary and comprehensive benefits package.
- Opportunity to work with cutting-edge technologies and industry-leading experts.
- Flexible work environment with the option for remote work (hybrid).
- Professional development opportunities and support for continued learning.
- Dynamic and collaborative company culture with opportunities for growth and advancement.
If you are a highly motivated and skilled Automation Engineer looking to take the next step in your career, we encourage you to apply for this exciting opportunity to join our team at Indee. Help us drive innovation and shape the future of technology!
at Indee
About Indee
Indee is among the leading providers of a proprietary platform for secure video distribution and streaming, used by some of the world’s largest media companies, including Netflix, Paramount Pictures, Disney, and over 1100 other companies, big and small. Indee has grown 5x in the last 3 years and is scaling up at a rapid rate.
About the role
We are seeking a highly skilled and experienced Automation Engineer to join our dynamic team. As an Automation Engineer, you will play a key role in designing, implementing, and maintaining our automation testing framework. The primary focus of this role will be on utilizing Selenium, Pytest, Allure reporting, Python Requests, and Boto3 for automation testing and infrastructure management.
Responsibilities:
- Develop and maintain automated test scripts using Selenium WebDriver and Pytest to ensure the quality of web applications.
- Implement and enhance the automation testing framework to support scalability, reliability, and efficiency.
- Generate comprehensive test reports using Allure reporting for test result visualization and analysis.
- Conduct API testing using Python Requests, ensuring the functionality and reliability of backend services.
- Utilize Boto3 for automation of AWS infrastructure provisioning, configuration, and management.
- Collaborate with cross-functional teams, including developers, QA engineers, and DevOps engineers, to understand project requirements and deliver high-quality solutions.
- Identify opportunities for process improvement and optimization within the automation testing process.
- Provide technical expertise and guidance to junior team members, fostering a culture of continuous learning and development.
- Stay updated on industry trends and emerging technologies, incorporating them into our automation testing practices as appropriate.
- Participate in code reviews, ensuring adherence to coding standards and best practices.
Requirements:
- Strong programming skills in Python, with proficiency in writing clean, maintainable code.
- Experience with cloud infrastructure management and automation using AWS services and Boto3.
- Solid understanding of software testing principles, methodologies, and best practices.
- Excellent problem-solving skills and attention to detail.
- Ability to work effectively both independently and collaboratively in a fast-paced environment.
- Strong communication and interpersonal skills, with the ability to interact with stakeholders at all levels.
- Passion for technology and a desire to continuously learn and improve.
- Prior experience in Agile development methodologies.
- Experience with performance testing using Locust is considered a plus.
Qualifications:
- Education: Bachelor's degree in Computer Science, Software Engineering, or related field; Master’s degree preferred.
- Experience: 3 - 5 years of proven experience in automation testing using Selenium WebDriver, Pytest, Appium, Allure reporting, Python Requests, and Boto3
Benefits:
- Competitive salary and comprehensive benefits package.
- Opportunity to work with cutting-edge technologies and industry-leading experts.
- Flexible work environment with the option for remote work (hybrid).
- Professional development opportunities and support for continued learning.
- Dynamic and collaborative company culture with opportunities for growth and advancement.
If you are a highly motivated and skilled Automation Engineer looking to take the next step in your career, we encourage you to apply for this exciting opportunity to join our team at Indee. Help us drive innovation and shape the future of technology!
a leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage
What are we looking for?
- Bachelor’s degree in analytics related area (Data Science, Computer Engineering, Computer Science, Information Systems, Engineering, or a related discipline)
- 7+ years of work experience in data science, analytics, engineering, or product management for a diverse range of projects
- Hands on experience with python, deploying ML models
- Hands on experience with time-wise tracking of model performance, and diagnosis of data/model drift
- Familiarity with Dataiku, or other data-science-enabling tools (Sagemaker, etc)
- Demonstrated familiarity with distributed computing frameworks (snowpark, pyspark)
- Experience working with various types of data (structured / unstructured)
- Deep understanding of all data science phases (eg. Data engineering, EDA, Machine Learning, MLOps, Serving)
- Highly self-motivated to deliver both independently and with strong team collaboration
- Ability to creatively take on new challenges and work outside comfort zone
- Strong English communication skills (written & verbal)
Roles & Responsibilities:
- Clearly articulates expectations, capabilities, and action plans; actively listens with others’ frame of reference in mind; appropriately shares information with team; favorably influences people without direct authority
- Clearly articulates scope and deliverables of projects; breaks complex initiatives into detailed component parts and sequences actions appropriately; develops action plans and monitors progress independently; designs success criteria and uses them to track outcomes; drives implementation of recommendations when appropriate, engages with stakeholders throughout to ensure buy-in
- Manages projects with and through others; shares responsibility and credit; develops self and others through teamwork; comfortable providing guidance and sharing expertise with others to help them develop their skills and perform at their best; helps others take appropriate risks; communicates frequently with team members earning respect and trust of the team
- Experience in translating business priorities and vision into product/platform thinking, set clear directives to a group of team members with diverse skillsets, while providing functional & technical guidance and SME support
- Demonstrated experience interfacing with internal and external teams to develop innovative data science solutions
- Strong business analysis, product design, and product management skills
- Ability to work in a collaborative environment - reviewing peers' code, contributing to problem- solving sessions, ability to communicate technical knowledge to a a variety of audience (such as management, brand teams, data engineering teams, etc.)
- Ability to articulate model performance to a non-technical crowd, and ability to select appropriate evaluation criteria to evaluate hidden confounders and biases within a model
- MLOps frameworks and their use in model tracking and deployment, and automating the model serving pipeline
- Work with all sizes of ML model from linear/logistic regression to other sklearn-like models, to deep learning
- Formulate training schema for unbiased model training e.g. K-fold cross-validation, leave-one- out cross-validation) for parameter searching and model tuning
- Ability to work on machine learning work like recommender systems, end-to-end ML lifecycle)
- Ability to manage ML on largely imbalanced training sets (<5% positive rate)
- Ability to articulate model performance to a non-technical crowd, and ability to select appropriate evaluation criteria to evaluate hidden confounders and biases within a model
- MLOps frameworks and their use in model tracking and deployment, and automating the model serving pipeline
Client based at Bangalore location.
Data Scientist - Healthcare AI
Location: Bangalore, India
Experience: 4+ years
Skills Required: Radiology, visual images, text, classical models, LLM multi-modal
Responsibilities:
· LLM Development and Fine-tuning: Fine-tune and adapt large language models (e.g., GPT, Llama2, Mistral) for specific healthcare applications, such as text classification, named entity recognition, and question answering.
· Data Engineering: Collaborate with data engineers to build robust data pipelines for large-scale text datasets used in LLM training and fine-tuning.
· Model Evaluation and Optimization: Develop rigorous experimentation frameworks to assess model performance, identify areas for improvement, and inform model selection.
· Production Deployment: Work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
· Predictive Model Design: Leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models).
· Cross-functional Collaboration: Partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions.
· Knowledge Sharing: Mentor junior team members and stay up-to-date with the latest advancements in machine learning and LLMs.
Qualifications:
· Doctoral or master's degree in computer science, Data Science, Artificial Intelligence, or related field.
· 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models.
· 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).
· Experience working with cloud-based platforms (AWS, GCP, Azure).
Preferred Qualifications:
o Experience working in the healthcare domain, particularly oncology.
o Publications in relevant scientific journals or conferences.
o Degree from a prestigious university or research institution.
Salary: INR 15 to INR 30 lakhs per annum
Performance Bonus: Up to 10% of the base salary can be added
Location: Bangalore or Pune
Experience: 2-5 years
About AbleCredit:
AbleCredit is on a mission to solve the Credit Gap of emerging economies. In India alone, the Credit Gap is over USD 5T (Trillion!). This is the single largest contributor to poverty, poor genie index and lack of opportunities. Our Vision is to deploy AI reliably, and safely to solve some of the greatest problems of humanity.
Job Description:
This role is ideal for someone with a strong foundation in deep learning and hands-on experience with AI technologies.
- You will be tasked with solving complex, real-world problems using advanced machine learning models in a privacy-sensitive domain, where your contributions will have a direct impact on business-critical processes.
- As a Machine Learning Engineer at AbleCredit, you will collaborate closely with the founding team, who bring decades of industry expertise to the table.
- You’ll work on deploying cutting-edge Generative AI solutions at scale, ensuring they align with strict privacy requirements and optimize business outcomes.
This is an opportunity for experienced engineers to bring creative AI solutions to one of the most challenging and evolving sectors, while making a significant difference to the company’s growth and success.
Requirements:
- Experience: 2-4 years of hands-on experience in applying machine learning and deep learning techniques to solve complex business problems.
- Technical Skills: Proficiency in standard ML tools and languages, including:
- Python: Strong coding ability for building, training, and deploying machine learning models.
- PyTorch (or MLX or Jax): Solid experience in one or more deep learning frameworks for developing and fine-tuning models.
- Shell scripting: Familiarity with Unix/Linux shell scripting for automation and system-level tasks.
- Mathematical Foundation: Good understanding of the mathematical principles behind machine learning and deep learning (linear algebra, calculus, probability, optimization).
- Problem Solving: A passion for solving tough, ambiguous problems using AI, especially in data-sensitive, large-scale environments.
- Privacy & Security: Awareness and understanding of working in privacy-sensitive domains, adhering to best practices in data security and compliance.
- Collaboration: Ability to work closely with cross-functional teams, including engineers, product managers, and business stakeholders, and communicate technical ideas effectively.
- Work Experience: This position is for experienced candidates only.
Additional Information:
- Location: Pune or Bangalore
- Work Environment: Collaborative and entrepreneurial, with close interactions with the founders.
- Growth Opportunities: Exposure to large-scale AI systems, GenAI, and working in a data-driven privacy-sensitive domain.
- Compensation: Competitive salary and ESOPs, based on experience and performance
- Industry Impact: You’ll be at the forefront of applying Generative AI to solve high-impact problems in the finance/credit space, helping shape the future of AI in the business world.
WHO WE ARE:
TIFIN is a fintech platform backed by industry leaders including JP Morgan, Morningstar, Broadridge, Hamilton Lane, Franklin Templeton, Motive Partners and a who’s who of the financial service industry. We are creating engaging wealth experiences to better financial lives through AI and investment intelligence powered personalization. We are working to change the world of wealth in ways that personalization has changed the world of movies, music and more but with the added responsibility of delivering better wealth outcomes.
We use design and behavioral thinking to enable engaging experiences through software and application programming interfaces (APIs). We use investment science and intelligence to build algorithmic engines inside the software and APIs to enable better investor outcomes.
In a world where every individual is unique, we match them to financial advice and investments with a recognition of their distinct needs and goals across our investment marketplace and our advice and planning divisions.
OUR VALUES: Go with your GUT
- Grow at the Edge. We are driven by personal growth. We get out of our comfort zone and keep egos aside to find our genius zones. With self-awareness and integrity we strive to be the best we can possibly be. No excuses.
- Understanding through Listening and Speaking the Truth. We value transparency. We communicate with radical candor, authenticity and precision to create a shared understanding. We challenge, but once a decision is made, commit fully.
- I Win for Teamwin. We believe in staying within our genius zones to succeed and we take full ownership of our work. We inspire each other with our energy and attitude. We fly in formation to win together.
Responsibilities:
- Develop user-facing features such as web apps and landing portals.
- Ensure the feasibility of UI/UX designs and implement them technically.
- Create reusable code and libraries for future use.
- Optimize applications for speed and scalability.
- Contribute to the entire implementation process, including defining improvements based on business needs and architectural enhancements.
- Promote coding, testing, and deployment of best practices through research and demonstration.
- Review frameworks and design principles for suitability in the project context.
- Demonstrate the ability to identify opportunities, lay out rational plans, and see them through to completion.
Requirements:
- Bachelor’s degree in Engineering with 10+ years of software product development experience.
- Proficiency in React, Django, Pandas, GitHub, AWS, and JavaScript, Python
- Strong knowledge of PostgreSQL, MongoDB, and designing REST APIs.
- Experience with scalable interactive web applications.
- Understanding of software design constructs and implementation.
- Familiarity with ORM libraries and Test-Driven Development.
- Exposure to the Finance domain is preferred.
- Knowledge of HTML5, LESS/CSS3, jQuery, and Bootstrap.
- Expertise in JavaScript fundamentals and front-end/back-end technologies.
Nice to Have:
- Strong knowledge of website security and common vulnerabilities.
- Exposure to financial capital markets and instruments.
Compensation and Benefits Package:
- Competitive compensation with a discretionary annual bonus.
- Performance-linked variable compensation.
- Medical insurance.
A note on location. While we have team centers in Boulder, New York City, San Francisco, Charlotte, and Mumbai, this role is based out of Bangalore
TIFIN is an equal-opportunity workplace, and we value diversity in our workforce. All qualified applicants will receive consideration for employment without regard to any discrimination.
Job Description:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in backend development, with proficiency in Python and/or Golang.
- Strong experience in front-end technologies, particularly React.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization tools like Docker and Kubernetes.
- Knowledge of Apache Spark is highly preferred.
- Solid understanding of database technologies, both relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB, Cassandra).
- Experience with CI/CD pipelines and automated testing frameworks.
- Excellent problem-solving skills and a proactive attitude toward tackling challenges
🚀 We're Hiring: Python AWS Fullstack Developer at InfoGrowth! 🚀
Join InfoGrowth as a Python AWS Fullstack Developer and be a part of our dynamic team driving innovative cloud-based solutions!
Job Role: Python AWS Fullstack Developer
Location: Bangalore & Pune
Mandatory Skills:
- Proficiency in Python programming.
- Hands-on experience with AWS services and migration.
- Experience in developing cloud-based applications and pipelines.
- Familiarity with DynamoDB, OpenSearch, and Terraform (preferred).
- Solid understanding of front-end technologies: ReactJS, JavaScript, TypeScript, HTML, and CSS.
- Experience with Agile methodologies, Git, CI/CD, and Docker.
- Knowledge of Linux (preferred).
Preferred Skills:
- Understanding of ADAS (Advanced Driver Assistance Systems) and automotive technologies.
- AWS Certification is a plus.
Why Join InfoGrowth?
- Work on cutting-edge technology in a fast-paced environment.
- Collaborate with talented professionals passionate about driving change in the automotive and tech industries.
- Opportunities for professional growth and development through exciting projects.
🔗 Apply Now to elevate your career with InfoGrowth and make a difference in the automotive sector!
Job Role: Adaptive Autosar + Bootloader Developer
Mandatory Skills:
- Adaptive Autosar Development
- Bootloader Experience
- C++ Programming
- Hands-on experience with ISO 14229 (UDS Protocol)
- Experience in Flash Bootloader and Software Update topics
- Proficient in C++ and Python programming
- Application development experience in Service-Oriented Architectures
- Hands-on experience with QNX and Linux operating systems
- Familiarity with software development tools like CAN Analyzer, CANoe, and Debugger
- Strong problem-solving skills and the ability to work independently
- Exposure to the ASPICE Process is an advantage
- Excellent analytical and communication skills
Job Responsibilities:
- Engage in tasks related to the integration and development of Flash Bootloader (FBL) features and perform comprehensive testing activities.
- Collaborate continuously with counterparts in Germany to understand requirements and develop FBL features effectively.
- Create test specifications and meticulously document testing results.
Why Join InfoGrowth?
- Become part of an innovative team focused on transforming the automotive industry with cutting-edge technology.
- Work on exciting projects that challenge your skills and promote professional growth.
- Enjoy a collaborative environment that values teamwork and creativity.
🔗 Apply Now to shape the future of automotive technology with InfoGrowth!
🚀 We're Hiring: Test Engineer at InfoGrowth 🚀
Are you an experienced Test Engineer with a passion for automotive embedded systems and automation testing? Join InfoGrowth, a leader in IT and cloud services, and become a part of a dynamic team focused on cutting-edge automotive technology solutions.
Key Responsibilities:
- Design and implement test automation solutions for embedded products in the automotive industry.
- Architect efficient test automation strategies from software to integrated system tests on virtual and physical embedded targets.
- Offer automation testing as a platform for development teams, enabling them to run CI/CD pipeline tests independently.
- Lead in the design and operation of test automation platforms, ensuring top-quality testing efficiency.
- Conduct both manual and automated testing at various system levels, including SW, BaseTech, ADAS, Brake, Suspension, Steering, and Connectivity.
Key Skills:
- Proven expertise in test automation for embedded products in the automotive domain.
- Proficient in Python programming, with experience in Robot Frameworks being a strong plus.
- Hands-on experience with Vector tools such as CANalyzer, CANoe, and CAPL.
- Solid understanding of communication protocols like CAN, LIN, FlexRay, and Ethernet.
- Strong knowledge of HIL (Hardware-in-the-loop) and SIL (Software-in-the-loop) technologies.
- Deep understanding of embedded software testing.
Preferred Experience Areas:
- Testing and implementation of automated/manual testing on Complete Vehicle Electronics System level (SWDL, network management, customer functions).
- Boxcar and HIL testing for critical automotive functions, including ADAS and Steering functionalities.
- Testing and implementing strategies for the Connectivity and system-level automotive technologies.
Job Description
We are looking for a talented Java Developer to work in abroad countries. You will be responsible for developing high-quality software solutions, working on both server-side components and integrations, and ensuring optimal performance and scalability.
Preferred Qualifications
- Experience with microservices architecture.
- Knowledge of cloud platforms (AWS, Azure).
- Familiarity with Agile/Scrum methodologies.
- Understanding of front-end technologies (HTML, CSS, JavaScript) is a plus.
Requirment Details
Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Proven experience as a Java Developer or similar role.
Strong knowledge of Java programming language and its frameworks (Spring, Hibernate).
Experience with relational databases (e.g., MySQL, PostgreSQL) and ORM tools.
Familiarity with RESTful APIs and web services.
Understanding of version control systems (e.g., Git).
Solid understanding of object-oriented programming (OOP) principles.
Strong problem-solving skills and attention to detail.
Key Responsibilities
• Lead the automation testing effort of our cloud management platform.
• Create and maintain automation test cases and test suites.
• Work closely with the development team to ensure that the automation tests are integrated into the development process.
• Collaborate with other QA team members to identify and resolve defects.
• Implement automation testing best practices and continuously improve the automation testing framework.
• Develop and maintain automation test scripts using programming languages such as Python.
• Conduct performance testing using tools such as JMeter, Gatling, or Locust.
• Monitor and report on automation testing and performance testing progress and results.
• Ensure that the automation testing and performance testing strategy aligns with overall product quality goals and objectives.
• Manage and mentor a team of automation QA engineers.
Requirements
• Bachelor's degree in Computer Science or a related field.
• Minimum of 8+ years of experience in automation testing and performance testing.
• Experience in leading and managing automation testing teams.
• Strong experience with automation testing frameworks including Robot Framework.
• Strong experience with programming languages, including Python.
• Strong understanding of software development lifecycle and agile methodologies.
• Experience with testing cloud-based applications.
• Good understanding of Cloud services & ecosystem, specifically AWS.
• Experience with performance testing tools such as JMeter, Gatling, or Locust.
• Excellent analytical and problem-solving skills.
• Excellent written and verbal communication skills.
• Ability to work independently and in a team environment.
• Passionate about automation testing and performance testing.
Role: Data Analyst (Apprentice - 1 Year contract)
Location: Bangalore, Hybrid Model
Job Summary: Join our team as an Apprentice for one year and take the next step in your career while supporting FOX’s unique corporate services and Business Intelligence (BI) platforms. This role offers a fantastic opportunity to leverage your communication skills, technical expertise, analytical and problem-solving competencies, and customer-focused experience.
Key Responsibilities:
· Assist in analyzing and solving data-related challenges.
· Support the development and maintenance of corporate service and BI platforms.
· Collaborate with cross-functional teams to enhance user experience and operational efficiency.
· Participate in training sessions to further develop technical and analytical skills.
· Conduct research and analysis to identify trends and insights in data.
· Prepare reports and presentations to communicate findings to stakeholders.
· Engage with employees to understand their needs and provide support.
· Contribute insights and suggestions during team meetings to drive continuous improvement.
Qualifications:
· Bachelor’s degree in Engineering (2024 pass-out).
· Strong analytical and technical skills with attention to detail.
· Excellent communication skills, both verbal and written.
· Ability to work collaboratively in a team-oriented environment.
· Proactive attitude and a strong willingness to learn.
· Familiarity with data analysis tools and software (e.g., Excel, SQL, Tableau) is a plus.
· Basic understanding of programming languages (e.g., Python, R) is an advantage.
Additional Information:
- This position offers a hybrid work model, allowing flexibility between remote and in-office work.
- Opportunities for professional development and skill enhancement through buddy and mentorship.
- Exposure to real-world projects and the chance to contribute to impactful solutions.
- A supportive and inclusive team environment that values diverse perspectives.
at Juntrax Solutions
Software Development QA Engineer (Automation and Manual)
Juntrax Solutions is a SF/Bengaluru-based company developing and innovating SaaS-based products. Currently working on developing a brand new product, so great opportunity to be part of product lifecycle development and core team.
Roles and Responsibilities
- Understand and update project requirements documents as necessary
- create & execute test cases
- Design, develop and execute automated test scripts for regression testing.
- Test, report, and manage defect pipeline and communicate with the team.
- Collaborate with the design and development team to provide input as it relates to the requirements
- Setup test automation frameworks for multiple application platforms like Web and Mobile. Manage and execute test scripts on these frameworks.
- Investigate customer problems referred by the technical support team.
- Able to build different test scenarios and validate acceptance test cases.
- Handle technical communications with stakeholders and team
- Bachelor's in Computer Science, Engineering, and minimum 1-2 years of industry experience as a Software Development Test Engineer.
- Must have excellent verbal and written communication skills.
- Testing materials like test cases, plans, test strategies, bug reports created should be easy to read and comprehend.
- Must efficiently manage multiple tasks, have high productivity, time management skills
- You should be able to upgrade your technical skills with the changing technologies.
- Able to work independently and take ownership of the product testing role
- should have a product mindset and spirit to work in an early-stage start-up.
- Should have previously set up an automation framework.
Other
- Must be able to work in Bangalore.
- Salary based on qualifications and experience.
- Excellent growth opportunity
at Wissen Technology
Job Requirements:
Intermediate Linux Knowledge
- Experience with shell scripting
- Familiarity with Linux commands such as grep, awk, sed
- Required
Advanced Python Scripting Knowledge
- Strong expertise in Python
- Required
Ruby
- Nice to have
Basic Knowledge of Network Protocols
- Understanding of TCP/UDP, Multicast/Unicast
- Required
Packet Captures
- Experience with tools like Wireshark, tcpdump, tshark
- Nice to have
High-Performance Messaging Libraries
- Familiarity with tools like Tibco, 29West, LBM, Aeron
- Nice to have
• Proven experience as a Linux Platform Engineer, with a focus on Red Hat Enterprise Linux (RHEL) preferred.
• Proficient in Python and Ansible scripting.
• Demonstrated expertise in Linux OS configuration management.
• Experience with RHEL LEAPP upgrades
• Experience with Data Visualization tools, such as Splunk and Grafana
• Experience with Service Now
• Excellent problem-solving skills and attention to detail
• Strong communication and collaboration skills for working in a team-oriented environment.
• Red Hat Certified Engineer (RHCE) certification for RHEL8 or equivalent
• Willingness to adapt to evolving technologies and embrace continuous learning.
• Knowledge of financial services, including regulations and security standards, is preferred.
• Stay current with industry trends and best practices in Linux, financial services, and cybersecurity.
- Responsible for designing, storing, processing, and maintaining of large-scale data and related infrastructure.
- Can drive multiple projects both from operational and technical standpoint.
- Ideate and build PoV or PoC for new product that can help drive more business.
- Responsible for defining, designing, and implementing data engineering best practices, strategies, and solutions.
- Is an Architect who can guide the customers, team, and overall organization on tools, technologies, and best practices around data engineering.
- Lead architecture discussions, align with business needs, security, and best practices.
- Has strong conceptual understanding of Data Warehousing and ETL, Data Governance and Security, Cloud Computing, and Batch & Real Time data processing
- Has strong execution knowledge of Data Modeling, Databases in general (SQL and NoSQL), software development lifecycle and practices, unit testing, functional programming, etc.
- Understanding of Medallion architecture pattern
- Has worked on at least one cloud platform.
- Has worked as data architect and executed multiple end-end data engineering project.
- Has extensive knowledge of different data architecture designs and data modelling concepts.
- Manages conversation with the client stakeholders to understand the requirement and translate it into technical outcomes.
Required Tech Stack
- Strong proficiency in SQL
- Experience working on any of the three major cloud platforms i.e., AWS/Azure/GCP
- Working knowledge of an ETL and/or orchestration tools like IICS, Talend, Matillion, Airflow, Azure Data Factory, AWS Glue, GCP Composer, etc.
- Working knowledge of one or more OLTP databases (Postgres, MySQL, SQL Server, etc.)
- Working knowledge of one or more Data Warehouse like Snowflake, Redshift, Azure Synapse, Hive, Big Query, etc.
- Proficient in at least one programming language used in data engineering, such as Python (or Scala/Rust/Java)
- Has strong execution knowledge of Data Modeling (star schema, snowflake schema, fact vs dimension tables)
- Proficient in Spark and related applications like Databricks, GCP DataProc, AWS Glue, EMR, etc.
- Has worked on Kafka and real-time streaming.
- Has strong execution knowledge of data architecture design patterns (lambda vs kappa architecture, data harmonization, customer data platforms, etc.)
- Has worked on code and SQL query optimization.
- Strong knowledge of version control systems like Git to manage source code repositories and designing CI/CD pipelines for continuous delivery.
- Has worked on data and networking security (RBAC, secret management, key vaults, vnets, subnets, certificates)
Job Description: AI/ML Engineer
Location: Bangalore (On-site)
Experience: 2+ years of relevant experience
About the Role:
We are seeking a skilled and passionate AI/ML Engineer to join our team in Bangalore. The ideal candidate will have over two years of experience in developing, deploying, and maintaining AI and machine learning models. As an AI/ML Engineer, you will work closely with our data science team to build innovative solutions and deploy them in a production environmen
Key Responsibilities:
- Develop, implement, and optimize machine learning models.
- Perform data manipulation, exploration, and analysis to derive actionable insights.
- Use advanced computer vision techniques, including YOLO and other state-of-the-art methods, for image processing and analysis.
- Collaborate with software developers and data scientists to integrate AI/ML solutions into the company's applications and products.
- Design, test, and deploy scalable machine learning solutions using TensorFlow, OpenCV, and other related technologies.
- Ensure the efficient storage and retrieval of data using SQL and data manipulation libraries such as pandas and NumPy.
- Contribute to the development of backend services using Flask or Django for deploying AI models.
- Manage code using Git and containerize applications using Docker when necessary.
- Stay updated with the latest advancements in AI/ML and integrate them into existing projects.
Required Skills:
- Proficiency in Python and its associated libraries (NumPy, pandas).
- Hands-on experience with TensorFlow for building and training machine learning models.
- Strong knowledge of linear algebra and data augmentation techniques.
- Experience with computer vision libraries like OpenCV and frameworks like YOLO.
- Proficiency in SQL for database management and data extraction.
- Experience with Flask for backend development.
- Familiarity with version control using Git.
Optional Skills:
- Experience with PyTorch, Scikit-learn, and Docker.
- Familiarity with Django for web development.
- Knowledge of GPU programming using CuPy and CUDA.
- Understanding of parallel processing techniques.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Demonstrated experience in AI/ML, with a portfolio of past projects.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork skills.
Why Join Us?
- Opportunity to work on cutting-edge AI/ML projects.
- Collaborative and dynamic work environment.
- Competitive salary and benefits.
- Professional growth and development opportunities.
If you're excited about using AI/ML to solve real-world problems and have a strong technical background, we'd love to hear from you!
Apply now to join our growing team and make a significant impact!
Client based at Bangalore location.
Data Science:
• Python expert level, Analytical, Different models works, Basic concepts, CPG(Domain).
• Statistical Models & Hypothesis , Testing
• Machine Learning Important
• Business Understanding, visualization in Python.
• Classification, clustering and regression
•
Mandatory Skills
• Data Science, Python, Machine Learning, Statistical Models, Classification, clustering and regression
Looking for Python with React.
Python frameworks like Django or Flask.
Develop RESTful APIs or GraphQL endpoints
About us
Fisdom is one of the largest wealthtech platforms that allows investors to manage their wealth in an intuitive and seamless manner. Fisdom has a suite of products and services that takes care of every wealth requirement that an individual would have. This includes Mutual Funds, Stock Broking, Private Wealth, Tax Filing, and Pension funds
Fisdom has a B2C app and also an award-winning B2B2C distribution model where we have partnered with 15 of the largest banks in India such as Indian Bank and UCO Bank to provide wealth products to their customers. In our bank-led distribution model, our SDKs are integrated seamlessly into the bank’s mobile banking and internet banking application. Fisdom is the first wealthtech company in the country to launch a stock broking product for customers of a PSU bank.
The company is breaking down barriers by enabling access to wealth management to underserved customers. All our partners combined have a combined user base of more than 50 crore customers. This makes us uniquely placed to disrupt the wealthtech space which we believe is in its infancy in India in terms of wider adoption.
Where are we now and where are we heading towards
Founded by veteran VC-turned entrepreneur Subramanya SV(Subu) and former investment
banker Anand Dalmia, Fisdom is backed by PayU (Naspers), Quona Capital, and Saama Capital; with $37million of total funds raised so far. Fisdom is known for its revenue and profitability focussed approach towards sustainable business.
Fisdom is the No.1 company in India in the B2B2C wealthtech space and one of the most admired companies in the fintech ecosystem for our business model. We look forward to growing the leadership position by staying focussed on product and technology innovation.
Our technology team
Today we are a 60-member strong technology team. Everyone in the team is a hands-on engineer, including the team leads and managers. We take pride in being product engineers and we believe engineers are fundamentally problem solvers first. Our culture binds us together as one cohesive unit. We stress on engineering excellence and strive to become a high talent density team. Some values that we preach and practice include:
- Individual ownership and collective responsibility
- Focus on continuous learning and constant improvement in every aspect of engineering and product
- Cheer for openness, inclusivity and transparency
- Merit-based growth
What we are looking for
- Are open to work in a flat, non-hierarchical setup where daily focus is only shipping features not reporting to managers
- Experience designing highly interactive web applications with performance, scalability, accessibility, usability, design, and security in mind.
- Experience with distributed (multi-tiered) systems, algorithms, and relational and no-sql databases.
- Ability to break-down larger/fuzzier problems into smaller ones in the scope of the product
- Experience with architectural trade-offs, applying synchronous and asynchronous design patterns, and delivering with speed while maintaining quality.
- Raise the bar on sustainable engineering by improving best practices, producing best in class of code, documentation, testing and monitoring.
- Contributes in code and actively takes part in code reviews.
- Working with the Product Owner/managers to clearly define the scope of multiple sprints. Lead/guide the team through sprint(s) scoping, resource allocation and commitment - the execution plan.
- Drives feature development end-to-end. Active partner with product, design, and peer engineering leads and managers.
- Familiarity with build, release, deployment tools such as Ant, Maven, and Gradle, Docker, Kubernetes, Jenkins etc.
- Effective at influencing a culture of engineering craftsmanship and excellence
- Helps the team make the right choices. Drives adoption of engineering best practices and development processes within their team.
- Understanding security and compliance.
- User authentication and authorisation between multiple systems, servers, and environments.
- Based on your experience, you may lead a small team of Engineers.
If you don't have all of these, that's ok. But be excited about learning the few you don't know.
Skills
Microservices, Engineering Management, Quality management, Technical Architecture, technical lead. Hands-on programming experience in one of languages: Python, Golang.
Additional perks
- Access to large repositories of online courses through Myacademy (includes Udemy, Coursera, Harvard ManageMentor, Udacity and many more). We strongly encourage learning something outside of work as a habit.
- Career planning support/counseling / coaching support. Both internal and external coaches.
- Relocation policy
You will not be a good fit for this role if
- you have experience of only working with services companies or have spent a major part of your time there
- you are not open to shifting to new programming language or stack but exploring a position aligned to your current technical experience
- you are not very hands-on, seek direction constantly and need continuous supervision from a manager to finish tasks
- you like to working alone and mentoring junior engineers does not interest you
- you are looking to work in very large teams
Why join us and where?
We're a small but high performing engineering team. We recognize that the work we do impacts the lives of hundreds and thousands of people. Your work will contribute significantly to our mission. We pay competitive compensation and performance bonuses. We provide a high energy work environment and you are encouraged to play around new technology and self-learning. You will be based out of Bangalore
About us
Fisdom is one of the largest wealthtech platforms that allows investors to manage their wealth in an intuitive and seamless manner. Fisdom has a suite of products and services that takes care of every wealth requirement that an individual would have. This includes Mutual Funds, Stock Broking, Private Wealth, Tax Filing, and Pension funds
Fisdom has a B2C app and also an award-winning B2B2C distribution model where we have partnered with 15 of the largest banks in India such as Indian Bank and UCO Bank to provide wealth products to their customers. In our bank-led distribution model, our SDKs are integrated seamlessly into the bank’s mobile banking and internet banking application. Fisdom is the first wealthtech company in the country to launch a stock broking product for customers of a PSU bank.
The company is breaking down barriers by enabling access to wealth management to underserved customers. All our partners combined have a combined user base of more than 50 crore customers. This makes us uniquely placed to disrupt the wealthtech space which we believe is in its infancy in India in terms of wider adoption.
Where are we now and where are we heading towards
Founded by veteran VC-turned entrepreneur Subramanya SV(Subu) and former investment
banker Anand Dalmia, Fisdom is backed by PayU (Naspers), Quona Capital, and Saama Capital; with $37million of total funds raised so far. Fisdom is known for its revenue and profitability focussed approach towards sustainable business.
Fisdom is the No.1 company in India in the B2B2C wealthtech space and one of the most admired companies in the fintech ecosystem for our business model. We look forward to growing the leadership position by staying focussed on product and technology innovation.
Our technology team
Today we are a 60-member strong technology team. Everyone in the team is a hands-on engineer, including the team leads and managers. We take pride in being product engineers and we believe engineers are fundamentally problem solvers first. Our culture binds us together as one cohesive unit. We stress on engineering excellence and strive to become a high talent density team. Some values that we preach and practice include:
- Individual ownership and collective responsibility
- Focus on continuous learning and constant improvement in every aspect of engineering and product
- Cheer for openness, inclusivity and transparency.
- Merit-based growth
Key Responsibilities
- Write code, build prototypes and resolve issues.
- Write and review unit test cases.
- Review code & designs for both oneself and team members
- Defining and building microservices
- Building systems with positive business outcome
- Tracking module health, usage & behaviour tracking.
Key Skills
- An engineer with at least 1-3 years of working experience in web services, preferably in Python
- Must have a penchant for good API design.
- Must be a stickler for good, clear and secure coding.
- Must have built and released APIs in production.
- Experience in working with RDBMS & NoSQL databases.
- Working knowledge of GCP, AWS, Azure or any other cloud provider.
- Aggressive problem diagnosis & creative problem solving skills.
- Communication skills, to speak to developers across the world
Why join us and where?
We're a small but high performing engineering team. We recognize that the work we do impacts the lives of hundreds and thousands of people. Your work will contribute significantly to our mission. We pay competitive compensation and performance bonuses. We provide a high energy work environment and you are encouraged to play around new technology and self-learning. You will be based out of Bangalore.
Job Description: Python Backend Developer
Experience: 7-12 years
Job Type: Full-time
Job Overview:
Wissen Technology is looking for a highly experienced Python Backend Developer with 7-12 years of experience to join our team. The ideal candidate will have deep expertise in backend development using Python, with a strong focus on Django and Flask frameworks.
Key Responsibilities:
- Develop and maintain robust backend services and APIs using Python, Django, and Flask.
- Design scalable and efficient database schemas, integrating with both relational and NoSQL databases.
- Collaborate with front-end developers and other team members to establish objectives and design functional, cohesive code.
- Optimize applications for maximum speed and scalability.
- Ensure security and data protection protocols are implemented effectively.
- Troubleshoot and debug applications to ensure a seamless user experience.
- Participate in code reviews, testing, and quality assurance processes.
Required Skills:
Python: Extensive experience in backend development using Python.
Django & Flask: Proficiency in Django and Flask frameworks.
Database Management: Strong knowledge of databases such as PostgreSQL, MySQL, and MongoDB.
API Development: Expertise in building and maintaining RESTful APIs.
Security: Understanding of security best practices and data protection measures.
Version Control: Experience with Git for collaboration and version control.
Problem-Solving: Strong analytical skills with a focus on writing clean, efficient code.
Communication: Excellent communication and teamwork skills.
Preferred Qualifications:
- Experience with cloud services like AWS, Azure, or GCP.
- Familiarity with Docker and containerization.
- Knowledge of CI/CD practices.
Why Join Wissen Technology?
- Opportunity to work on innovative projects with a cutting-edge technology stack.
- Competitive compensation and benefits package.
- A supportive environment that fosters professional growth and learning.
Company: Optimum Solutions
About the company: Optimum solutions is a leader in a sheet metal industry, provides sheet metal solutions to sheet metal fabricators with a proven track record of reliable product delivery. Starting from tools through software, machines, we are one stop shop for all your technology needs.
Role Overview:
- Creating and managing database schemas that represent and support business processes, Hands-on experience in any SQL queries and Database server wrt managing deployment.
- Implementing automated testing platforms, unit tests, and CICD Pipeline
- Proficient understanding of code versioning tools, such as GitHub, Bitbucket, ADO
- Understanding of container platform, such as Docker
Job Description
- We are looking for a good Python Developer with Knowledge of Machine learning and deep learning framework.
- Your primary focus will be working the Product and Usecase delivery team to do various prompting for different Gen-AI use cases
- You will be responsible for prompting and building use case Pipelines
- Perform the Evaluation of all the Gen-AI features and Usecase pipeline
Position: AI ML Engineer
Location: Chennai (Preference) and Bangalore
Minimum Qualification: Bachelor's degree in computer science, Software Engineering, Data Science, or a related field.
Experience: 4-6 years
CTC: 16.5 - 17 LPA
Employment Type: Full Time
Key Responsibilities:
- Take care of entire prompt life cycle like prompt design, prompt template creation, prompt tuning/optimization for various Gen-AI base models
- Design and develop prompts suiting project needs
- Lead and manage team of prompt engineers
- Stakeholder management across business and domains as required for the projects
- Evaluating base models and benchmarking performance
- Implement prompt gaurdrails to prevent attacks like prompt injection, jail braking and prompt leaking
- Develop, deploy and maintain auto prompt solutions
- Design and implement minimum design standards for every use case involving prompt engineering
Skills and Qualifications
- Strong proficiency with Python, DJANGO framework and REGEX
- Good understanding of Machine learning framework Pytorch and Tensorflow
- Knowledge of Generative AI and RAG Pipeline
- Good in microservice design pattern and developing scalable application.
- Ability to build and consume REST API
- Fine tune and perform code optimization for better performance.
- Strong understanding on OOP and design thinking
- Understanding the nature of asynchronous programming and its quirks and workarounds
- Good understanding of server-side templating languages
- Understanding accessibility and security compliance, user authentication and authorization between multiple systems, servers, and environments
- Integration of APIs, multiple data sources and databases into one system
- Good knowledge in API Gateways and proxies, such as WSO2, KONG, nginx, Apache HTTP Server.
- Understanding fundamental design principles behind a scalable and distributed application
- Good working knowledge on Microservices architecture, behaviour, dependencies, scalability etc.
- Experience in deploying on Cloud platform like Azure or AWS
- Familiar and working experience with DevOps tools like Azure DEVOPS, Ansible, Jenkins, Terraform
Zethic Technologies is one of the leading creative tech studio based in Bangalore. Zethic’s team members have years of experience in software development. Zethic specializes in Custom software development, Mobile Applications development, chatbot development, web application development, UI/UX designing, and consulting.
Your Responsibilities:
- Coordinating with the software development team in addressing technical doubts
- Reviewing ongoing operations and rectifying any issues
- Work closely with the developers to determine and implement appropriate design and code changes, and make relevant recommendations to the team
- Very good leadership skills with the ability to lead multiple development teams
- Ability to learn new technologies rapidly and share knowledge with other team members.
- Provide technical leadership to programmers working on the development project team.
- Must have knowledge of stages in SDLC
- Should be informed on designing the overall architecture of the web application. Should have experience working with graphic designers and converting designs to visual elements.
- Highly experienced with back-end programming languages (PHP, Python, JavaScript). Proficient experience using advanced JavaScript libraries and frameworks such as ReactJS.
- Development experience for both mobile and desktop. Knowledge of code versioning tools (GIT)
- Mentors junior web developers on technical issues and modern web development best practices and solutions
- Developing reusable code for continued use
Why join us?
We’re multiplying and the sky’s the limit
Work with a talented team you’ll learn a lot from them
We care about delivering value to our excellent customers
We are flexible in our opinions and always open to new ideas
We know it takes people with different ideas, strengths, backgrounds, cultures, beliefs, and interests to make our Company succeed.
We celebrate and respect all our employees equally.
Zethic ensures equal employment opportunity without discrimination or harassment based on race, color, religion, sex, gender identity, age, disability, national origin, marital status, genetic information, veteran status, or any other characteristic protected by law.
Job Purpose and Impact
The DevOps Engineer is a key position to strengthen the security automation capabilities which have been identified as a critical area for growth and specialization within Global IT’s scope. As part of the Cyber Intelligence Operation’s DevOps Team, you will be helping shape our automation efforts by building, maintaining and supporting our security infrastructure.
Key Accountabilities
- Collaborate with internal and external partners to understand and evaluate business requirements.
- Implement modern engineering practices to ensure product quality.
- Provide designs, prototypes and implementations incorporating software engineering best practices, tools and monitoring according to industry standards.
- Write well-designed, testable and efficient code using full-stack engineering capability.
- Integrate software components into a fully functional software system.
- Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
- Proficiency in at least one configuration management or orchestration tool, such as Ansible.
- Experience with cloud monitoring and logging services.
Qualifications
Minimum Qualifications
- Bachelor's degree in a related field or equivalent exp
- Knowledge of public cloud services & application programming interfaces
- Working exp with continuous integration and delivery practices
Preferred Qualifications
- 3-5 years of relevant exp whether in IT, IS, or software development
- Exp in:
- Code repositories such as Git
- Scripting languages (Python & PowerShell)
- Using Windows, Linux, Unix, and mobile platforms within cloud services such as AWS
- Cloud infrastructure as a service (IaaS) / platform as a service (PaaS), microservices, Docker containers, Kubernetes, Terraform, Jenkins
- Databases such as Postgres, SQL, Elastic
at Cargill Business Services
Job Purpose and Impact:
The Sr. Generative AI Engineer will architect, design and develop new and existing GenAI solutions for the organization. As a Generative AI Engineer, you will be responsible for developing and implementing products using cutting-edge generative AI and RAG to solve complex problems and drive innovation across our organization. You will work closely with data scientists, software engineers, and product managers to design, build, and deploy AI-powered solutions that enhance our products and services in Cargill. You will bring order to ambiguous scenarios and apply in depth and broad knowledge of architectural, engineering and security practices to ensure your solutions are scalable, resilient and robust and will share knowledge on modern practices and technologies to the shared engineering community.
Key Accountabilities:
• Apply software and AI engineering patterns and principles to design, develop, test, integrate, maintain and troubleshoot complex and varied Generative AI software solutions and incorporate security practices in newly developed and maintained applications.
• Collaborate with cross-functional teams to define AI project requirements and objectives, ensuring alignment with overall business goals.
• Conduct research to stay up-to-date with the latest advancements in generative AI, machine learning, and deep learning techniques and identify opportunities to integrate them into our products and services, optimizing existing generative AI models and RAG for improved performance, scalability, and efficiency, developing and maintaining pipelines and RAG solutions including data preprocessing, prompt engineering, benchmarking and fine-tuning.
• Develop clear and concise documentation, including technical specifications, user guides and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders.
• Participate in the engineering community by maintaining and sharing relevant technical approaches and modern skills in AI.
• Contribute to the establishment of best practices and standards for generative AI development within the organization.
• Independently handle complex issues with minimal supervision, while escalating only the most complex issues to appropriate staff.
Minimum Qualifications:
• Bachelor’s degree in a related field or equivalent experience
• Minimum of five years of related work experience
• You are proficient in Python and have experience with machine learning libraries and frameworks
• Have deep understanding of industry leading Foundation Model capabilities and its application.
• You are familiar with cloud-based Generative AI platforms and services
• Full stack software engineering experience to build products using Foundation Models
• Confirmed experience architecting applications, databases, services or integrations.
Job Purpose and Impact:
The Enterprise Resource Planning (ERP) Engineering Supervisor will lead a small engineering team across technology and business capabilities to build and enhance modern business applications for ERP systems in the company. In this role, you will guide team in product development, architecture and technology adherence to ensure delivered solutions are secure and scalable. You will also lead team development and cross team relationships and delivery to advance the company's engineering delivery.
Key Accountabilities:
- Lead a team of engineering professionals that design, develop, deploy and enhance the new and existing software solutions.
- Provide direction to the team to build highly scalable and resilient software products and platforms to support business needs.
- Provide input and guidance to the delivery team across technology and business capabilities to accomplish team deliverables.
- Provide support to software engineers dedicated to products in other portfolios within ERP teams.
- Partner with the engineering community to coach engineers, share relevant technical approaches, identify new trends, modern skills and present code methodologies.
Qualifications:
MINIMUM QUALIFICATIONS:
- Bachelor’s degree in a related field or equivalent experience
- Minimum of four years of related work experience
PREFERRED QUALIFICATIONS:
- Confirmed hands on technical experience with technologies including cloud, software development and continuous integration and continuous delivery
- 2 years of supervisory experience
- Experience leading engineers in the area of ERP basis, Code Development (ABAP, HTML 5, Python, Java, etc..), or Design Thinking.