50+ Python Jobs in Pune | Python Job openings in Pune
Overall 4+ years of IT experience
Minimum 4 years of hands-on experience in Python and backend development with exposure to cloud technologies
Key Skills & Responsibilities:
Python Proficiency:
Strong hands-on experience in Python with knowledge of frameworks like Django or Flask.
Ability to write clean, efficient, and maintainable code.
Cloud Exposure:
Familiarity with cloud platforms (AWS, Azure, or GCP).
Hands-on experience with basic services like virtual machines, serverless functions (e.g., AWS Lambda), or storage solutions.
Backend Development:
Proficiency in developing and consuming RESTful APIs.
Knowledge of additional communication protocols (e.g., GraphQL, WebSockets) is a plus.
Database Management:
Experience with relational databases (e.g., PostgreSQL, MySQL).
Basic knowledge of NoSQL databases like MongoDB or DynamoDB.
System Design & Problem-Solving:
Ability to understand and implement scalable backend solutions.
Basic exposure to distributed systems or event-driven architectures (e.g., Kafka, RabbitMQ) is a bonus.
Collaboration & Communication:
Good communication skills to work effectively in a team.
Willingness to learn and adapt to new technologies.
Security & Best Practices:
Awareness of secure coding practices and API security (e.g., OAuth, JWT).
Good-to-Have Skills:
Basic understanding of full-stack development, including front-end technologies (React, Angular, Vue.js).
Familiarity with containerization (Docker) and CI/CD pipelines.
Knowledge of caching strategies using Redis or Memcached.
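Token-based API security of the kind mentioned above (OAuth, JWT) can be illustrated with a minimal HS256 signing sketch using only the Python standard library. This is a teaching sketch, not a production implementation; a real service would use a maintained library such as PyJWT and also encode claims like expiry.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict, secret: str) -> str:
    """Build a minimal HS256 JWT: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_token(token: str, secret: str) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(
        hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    )
    return hmac.compare_digest(sig, expected)
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` can leak signature bytes through timing differences.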
About the Role:
We are seeking a skilled Python Backend Developer to join our dynamic team. This role focuses on designing, building, and maintaining efficient, reusable, and reliable code that supports both monolithic and microservices architectures. The ideal candidate will have a strong understanding of backend frameworks and architectures, proficiency in asynchronous programming, and familiarity with deployment processes. Experience with AI model deployment is a plus.
Overall 5+ years of IT experience, with a minimum of 5 years of experience in Python and the open-source web framework Django, along with AWS experience.
Key Responsibilities:
- Develop, optimize, and maintain backend systems using Python, PySpark, and FastAPI.
- Design and implement scalable architectures, including both monolithic and microservices.
- 3+ years of working experience in AWS (Lambda, Serverless, Step Functions, and EC2)
- Deep knowledge of the Python Flask/Django frameworks
- Good understanding of REST APIs
- Sound knowledge of databases
- Excellent problem-solving and analytical skills
- Leadership skills, good communication skills, and an interest in learning modern technologies
- Apply design patterns (MVC, Singleton, Observer, Factory) to solve complex problems effectively.
- Work with web servers (Nginx, Apache) and deploy web applications and services.
- Create and manage RESTful APIs; familiarity with GraphQL is a plus.
- Use asynchronous programming techniques (ASGI, WSGI, async/await) to enhance performance.
- Integrate background job processing with Celery and RabbitMQ, and manage caching mechanisms using Redis and Memcached.
- (Optional) Develop containerized applications using Docker and orchestrate deployments with Kubernetes.
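The asynchronous programming techniques listed above (async/await) can be sketched with a small `asyncio` example, where concurrent I/O-bound calls take roughly one call's latency instead of the sum of all of them. The `fetch_user` function is a hypothetical stand-in for a database query or HTTP request.

```python
import asyncio

async def fetch_user(user_id: int) -> dict:
    # Stand-in for an I/O-bound call (database query, HTTP request).
    await asyncio.sleep(0.01)
    return {"id": user_id}

async def fetch_all(user_ids):
    # gather() runs the coroutines concurrently, so total latency is
    # roughly one call's latency rather than the sum of all of them.
    return await asyncio.gather(*(fetch_user(uid) for uid in user_ids))

users = asyncio.run(fetch_all([1, 2, 3]))
```

The same pattern underlies ASGI frameworks such as FastAPI, where each request handler is a coroutine the event loop can interleave with others.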
Required Skills:
- Languages & Frameworks: Python, Django, AWS
- Backend Architecture & Design: Strong knowledge of monolithic and microservices architectures, design patterns, and asynchronous programming.
- Web Servers & Deployment: Proficiency in Nginx and Apache, and experience in RESTful API design and development. GraphQL experience is a plus.
- Background Jobs & Task Queues: Proficiency in Celery and RabbitMQ, with experience in caching (Redis, Memcached).
- Additional Qualifications: Knowledge of Docker and Kubernetes (optional), with any exposure to AI model deployment considered a bonus.
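The caching experience mentioned above (Redis, Memcached) boils down to a key-value store with time-to-live expiry. Here is a minimal in-memory sketch of the lazy-eviction pattern those systems use; in production the store would live in a shared Redis/Memcached instance, not in process memory.

```python
import time

class TTLCache:
    """In-memory sketch of the TTL + lazy-eviction pattern used by
    Redis and Memcached. Illustrative only, not process-safe."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict the expired key on read
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:1", {"name": "Asha"})
```

Reads within the TTL return the cached value; after the TTL elapses the key is evicted on the next read, which is why cache misses must always fall back to the source of truth.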
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in backend development using Python and Django and AWS.
- Demonstrated ability to design and implement scalable and robust architectures.
- Strong problem-solving skills, attention to detail, and a collaborative mindset.
Preferred:
- Experience with Docker/Kubernetes for containerization and orchestration.
- Exposure to AI model deployment processes.
Sarvaha would like to welcome a talented Software Development Engineer in Test (SDET) with a minimum of 5 years of experience to join our team. As an SDET, you will champion the quality of the product and will design, develop, and maintain modular, extensible, and reusable test cases/scripts. This is a hands-on role that requires you to work with automation test developers and application developers to enhance the quality of the products and development practices. Please visit our website at http://www.sarvaha.com to learn more about us.
Key Responsibilities
- Understand requirements through specification or exploratory testing, estimate QA efforts, design the test strategy, develop optimal test cases, and maintain a requirements traceability matrix (RTM)
- Design, develop & maintain a scalable test automation framework
- Build interfaces to seamlessly integrate testing with development environments.
- Create & manage test setups that prioritize scalability, remote accessibility and reliability.
- Automate test scripts, create and execute relevant test suites, analyze test results, and enhance existing scripts or build new ones for coverage. Communicate with stakeholders for requirements, troubleshooting, etc.; provide visibility into the work by sharing relevant reports and metrics
- Stay up-to-date with industry best practices in testing methodologies and technologies to advise QA and integration teams.
Skills Required
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field (Software Engineering preferred).
- Minimum 5 years of experience in testing enterprise-grade, highly scalable, distributed applications, products, and services.
- Expertise in manual and automation testing, with an excellent understanding of test methodologies, test design techniques, and the test life cycle.
- Strong programming skills in TypeScript and Python, with experience using Playwright for building hybrid/BDD frameworks for website and API automation
- Very good problem-solving and analytical skills.
- Experience in databases, both SQL and NoSQL.
- Practical experience in setting up CI/CD pipelines (ideally with Jenkins).
- Exposure to Docker, Kubernetes and EKS is highly desired.
- C# experience is an added advantage.
- A continuous learning mindset and a passion for exploring new technologies.
- Excellent communication, collaboration, quick learning of needed language/scripting and influencing skills.
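One of the test design techniques referenced above, boundary-value analysis, can be sketched in a few lines: exercise the system at, just inside, and just outside the edges of a valid range, where off-by-one defects cluster. The `is_valid_age` function and its 18-60 range are hypothetical stand-ins for a system under test.

```python
def boundary_values(low: int, high: int) -> list:
    """Boundary-value analysis: probe at, just inside, and just
    outside the edges of a valid [low, high] range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def is_valid_age(age: int) -> bool:
    # Hypothetical system under test: ages 18..60 inclusive are valid.
    return 18 <= age <= 60

# Map each boundary case to the observed result for inspection.
cases = {age: is_valid_age(age) for age in boundary_values(18, 60)}
```

In a real framework these generated cases would feed a parameterized test (e.g. `pytest.mark.parametrize` or a Playwright fixture) rather than a plain dictionary.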
Position Benefits
- Competitive salary and excellent growth opportunities within a dynamic team.
- Positive and collaborative work environment with the opportunity to learn from talented colleagues.
- Highly challenging and rewarding software development problems to solve.
- Hybrid work model with established remote work options.
Who are we a.k.a “About Cambridge Wealth”:
We are an early stage Fintech Startup - working on exciting Fintech Products for some of the Top 5 Global Banks and building our own. If you are looking for a place where you can make a mark and not just be a cog in the wheel, Bakerstreet Fintech might be the place for you. We have a flat, ownership-oriented culture, and deliver world class quality. You will be working with a founding team that has delivered over 26 industry leading product experiences and won the Webby awards for Digital Strategy. In short, a bleeding edge team.
What are we looking for a.k.a “The JD” :
We are looking for a motivated and energetic Flutter Intern who will be running and designing product application features across various cross-platform devices. Just like Lego boxes that fit on top of one another, we are looking for someone who has experience using Flutter widgets that can be plugged together, customised, and deployed anywhere.
What will you be doing at CW a.k.a “Your Responsibilities”:
- Create multi-platform apps for iOS / Android using Flutter Development Framework.
- Participate in the analysis, design, implementation, and testing of new apps.
- Apply industry standards during the development process to ensure high quality.
- Translate designs and wireframes into high quality code.
- Ensure the best possible performance, quality, and responsiveness of the application.
- Help maintain code quality, organisation, and automation.
- Work on bug fixing and improving application performance.
What should our ideal candidate have a.k.a “Your Requirements”:
- Knowledge of mobile app development.
- Experience working at a startup of any stage, or having developed projects based on your own ideas.
- Good knowledge of Flutter and interest in developing mobile applications.
- Available for full time (in-office) internship.
Not sure whether you should apply? Here's a quick checklist to make things easier. You are someone who:
- You are ready to be a part of a Zero to One journey, which means you will be involved in building fintech products and processes from the ground up.
- You are comfortable to work in an unstructured environment with a small team where you decide what your day looks like and take initiative to take up the right piece of work, own it and work with the founding team on it.
- You don't need someone checking up on you every few hours. It is up to you to schedule check-ins whenever you find the need to; otherwise, we assume you are progressing well with your tasks. You will be expected to find solutions to problems and suggest improvements.
- You want complete ownership for your role & be able to drive it the way you think is right. You are looking to stick around for the long term and grow with the company.
- You have the ability to be a self-starter and take ownership of deliverables to develop a consensus with the team on approach and methods and deliver to them.
Speed-track your application process by completing the 40 min test at the link below:
https://app.testgorilla.com/s/itrlc3m2
On successfully clearing the above, there is a 20-30 minute video interview, followed by a technical interview and a meeting with the founder at the office. You may be requested to complete a brief in-person exercise at that point.
Please note that this is an on-site/work-from-office opportunity at our headquarters at Prabhat Road, Pune.
We're seeking an experienced Backend Software Engineer to join our team.
As a backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop.
This includes APIs, databases, and server-side logic.
Responsibilities:
- Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
- Write clean, efficient, and well-documented code that adheres to industry standards and best practices
- Participate in code reviews and contribute to the improvement of the codebase
- Debug and resolve issues in the existing codebase
- Develop and execute unit tests to ensure high code quality
- Work with DevOps engineers to ensure seamless deployment of software changes
- Monitor application performance, identify bottlenecks, and optimize systems for better scalability and efficiency
- Stay up-to-date with industry trends and emerging technologies; advocate for best practices and new ideas within the team
- Collaborate with cross-functional teams to identify and prioritize project requirements
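The unit-testing responsibility above can be illustrated with a small sketch: a hypothetical pagination helper of the kind a backend API exposes, with tests covering the happy path, the last partial page, and invalid input.

```python
def paginate(items, page: int, page_size: int):
    """Server-side pagination helper (hypothetical example)."""
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be >= 1")
    start = (page - 1) * page_size
    return items[start:start + page_size]

# Unit tests: happy path, last partial page, and bad input.
assert paginate(list(range(10)), page=1, page_size=4) == [0, 1, 2, 3]
assert paginate(list(range(10)), page=3, page_size=4) == [8, 9]
try:
    paginate([], page=0, page_size=4)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for page=0")
```

In practice these assertions would live in a test framework such as pytest or unittest so failures are reported individually, but the shape of the coverage is the same.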
Requirements:
- At least 2 years of experience building scalable and reliable backend systems
- Strong proficiency in at least one programming language such as Python, Node.js, Golang, or Ruby on Rails
- Experience with at least one framework such as Django, Express, or gRPC
- Knowledge of database systems such as MySQL, PostgreSQL, MongoDB, Cassandra, or Redis
- Familiarity with containerization technologies such as Docker and Kubernetes
- Understanding of software development methodologies such as Agile and Scrum
- Ability to pick up a new technology stack and ramp up on it fairly quickly
- Bachelor's/Master's degree in Computer Science or related field
- Strong problem-solving skills and ability to collaborate effectively with cross-functional teams
- Good written and verbal communication skills in English
We’re looking for a Tech Lead with expertise in ReactJS (Next.js), backend technologies, and database management to join our dynamic team.
Key Responsibilities:
- Lead and mentor a team of 4-6 developers.
- Architect and deliver innovative, scalable solutions.
- Ensure seamless performance while handling large volumes of data without system slowdowns.
- Collaborate with cross-functional teams to meet business goals.
Required Expertise:
- Frontend: ReactJS (Next.js is a must).
- Backend: Experience in Node.js, Python, or Java.
- Databases: SQL (mandatory), MongoDB (nice to have).
- Caching & Messaging: Redis, Kafka, or Cassandra experience is a plus.
- Proven experience in system design and architecture.
- Cloud certification is a bonus.
at Phonologies (India) Private Limited
Job Description
Phonologies is seeking a Senior Data Engineer to lead data engineering efforts for developing and deploying generative AI and large language models (LLMs). The ideal candidate will excel in building data pipelines, fine-tuning models, and optimizing infrastructure to support scalable AI systems for enterprise applications.
Role & Responsibilities
- Data Pipeline Management: Design and manage pipelines for AI model training, ensuring efficient data ingestion, storage, and transformation for real-time deployment.
- LLM Fine-Tuning & Model Lifecycle: Fine-tune LLMs on domain-specific data, and oversee the model lifecycle using tools like MLFlow and Weights & Biases.
- Scalable Infrastructure: Optimize infrastructure for large-scale data processing and real-time LLM performance, leveraging containerization and orchestration in hybrid/cloud environments.
- Data Management: Ensure data quality, security, and compliance, with workflows for handling sensitive and proprietary datasets.
- Continuous Improvement & MLOps: Apply MLOps/LLMOps practices for automation, versioning, and lifecycle management, while refining tools and processes for scalability and performance.
- Collaboration: Work with data scientists, engineers, and product teams to integrate AI solutions and communicate technical capabilities to business stakeholders.
Preferred Candidate Profile
- Experience: 5+ years in data engineering, focusing on AI/ML infrastructure, LLM fine-tuning, and deployment.
- Technical Skills: Advanced proficiency in Python, SQL, and distributed data tools.
- Model Management: Hands-on experience with MLFlow, Weights & Biases, and model lifecycle management.
- AI & NLP Expertise: Familiarity with LLMs (e.g., GPT, BERT) and NLP frameworks like Hugging Face Transformers.
- Cloud & Infrastructure: Strong skills with AWS, Azure, Google Cloud, Docker, and Kubernetes.
- MLOps/LLMOps: Expertise in versioning, CI/CD, and automating AI workflows.
- Collaboration & Communication: Proven ability to work with cross-functional teams and explain technical concepts to non-technical stakeholders.
- Education: Degree in Computer Science, Data Engineering, or related field.
Perks and Benefits
- Competitive Compensation: INR 20L to 30L per year.
- Innovative Work Environment for Personal Growth: Work with cutting-edge AI and data engineering tools in a collaborative setting, for continuous learning in data engineering and AI.
Building the machine learning production (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable Model tracking, model experimentation, Model automation
- Develop ML pipelines to support
- Develop MLOps components in Machine learning development life cycle using Model Repository (either of): MLFlow, Kubeflow Model Registry
- Develop MLOps components in Machine learning development life cycle using Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku or any relevant ML E2E PaaS/SaaS
- Work across all phases of Model development life cycle to build MLOPS components
- Build the knowledge base required to deliver increasingly complex MLOPS projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 3-5 years of experience building production-quality software.
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR Equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Knowledge of MLOps, machine learning, and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- CI/CD experience (e.g. Jenkins, GitHub Actions)
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable Model tracking, model experimentation, Model automation
- Develop scalable ML pipelines
- Develop MLOps components in Machine learning development life cycle using Model Repository (either of): MLFlow, Kubeflow Model Registry
- Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku or any relevant ML E2E PaaS/SaaS
- Work across all phases of Model development life cycle to build MLOPS components
- Build the knowledge base required to deliver increasingly complex MLOPS projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 5.5-9 years of experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning, and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production-ready ML pipelines.
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem-solving, project management, and communication skills, along with creative thinking
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
- Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face. Techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
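The prompt-engineering techniques listed above can be made concrete with a small sketch of few-shot chain-of-thought (CoT) prompting: each example pairs a question with a worked reasoning trace, then the new question is appended with the same cue. The template format here is illustrative, not a fixed standard.

```python
def build_cot_prompt(question: str, examples: list) -> str:
    """Few-shot chain-of-thought prompt: worked examples first,
    then the new question with the same reasoning cue."""
    parts = []
    for q, reasoning in examples:
        parts.append(f"Q: {q}\nA: Let's think step by step. {reasoning}")
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

prompt = build_cot_prompt(
    "A train travels 120 km in 2 hours. What is its speed?",
    examples=[("What is 3 + 4 * 2?",
               "Multiplication first: 4 * 2 = 8, then 3 + 8 = 11.")],
)
```

Frameworks like LangChain wrap this templating in classes such as prompt templates, but the underlying idea is exactly this string assembly.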
About Gyaan:
Gyaan empowers Go-To-Market teams to ascend to new heights in their sales performance, unlocking boundless opportunities for growth. We're passionate about helping sales teams excel beyond expectations. Our pride lies in assembling an unparalleled team and crafting a crucial solution that becomes an indispensable tool for our users. With Gyaan, sales excellence becomes an attainable reality.
About the Job:
Gyaan is seeking an experienced backend developer with expertise in Python, Django, AWS, and Redis to join our dynamic team! As a backend developer, you will be responsible for building responsive and scalable applications using Python, Django, and associated technologies.
Required Qualifications:
- 2+ years of hands-on experience programming in Python, Django
- Good understanding of CI/CD tools (GitHub Actions, GitLab CI) in a SaaS environment.
- Experience in building and running modern full-stack cloud applications using public cloud technologies such as AWS.
- Proficiency with at least one relational database system like MySQL, Oracle, or PostgreSQL.
- Experience with unit and integration testing.
- Effective communication skills, both written and verbal, to convey complex problems across different levels of the organization and to customers.
- Familiarity with Agile methodologies, software design lifecycle, and design patterns.
- Detail-oriented mindset to identify and rectify errors in code or product development workflow.
- Willingness to learn new technologies and concepts quickly, as the "cloud-native" field evolves rapidly.
Must Have Skills:
- Python
- Django Framework
- AWS
- Redis
- Database Management
Qualifications:
- Bachelor’s degree in Computer Science or equivalent experience.
If you are passionate about solving problems and have the required qualifications, we want to hear from you! You must be an excellent verbal and written communicator, enjoy collaborating with others, and welcome discussing a plan upfront. We offer a competitive salary, flexible work hours, and a dynamic work environment.
Job Title: DevOps Engineer
Job Type: Full-Time
About Us: Nirmitee.io is a leading IT company dedicated to delivering innovative solutions and services. We are looking for a talented DevOps Engineer with strong Python skills to join our dynamic team.
Job Description:
Responsibilities:
- Design, implement, and maintain CI/CD pipelines to ensure smooth deployment processes.
- Automate infrastructure provisioning, configuration, and deployment using tools like Terraform, Ansible, or similar.
- Develop and maintain scripts and tools for system management, monitoring, and automation using Python.
- Collaborate with development teams to ensure seamless integration and deployment of applications.
- Monitor system performance, troubleshoot issues, and ensure high availability and reliability of services.
- Implement and manage containerization technologies such as Docker and orchestration tools like Kubernetes.
- Ensure security best practices are followed in all aspects of the infrastructure and deployment processes.
- Participate in on-call rotations and respond to incidents as needed.
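The scripting-and-automation responsibilities above often reduce to handling transient failures gracefully. A common building block is retry with exponential backoff; this is a generic sketch (the `flaky` function is a hypothetical stand-in for a network call or cloud API request).

```python
import time

def retry(func, attempts: int = 3, base_delay: float = 0.01):
    """Retry with exponential backoff: a staple of deployment and
    monitoring scripts where transient failures are expected."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the real error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    # Hypothetical stand-in for a call that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry(flaky)
```

Production variants usually add jitter to the delay and retry only on specific exception types, so that permanent errors fail fast instead of burning all attempts.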
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Proven experience as a DevOps Engineer or in a similar role.
- Strong proficiency in Python for scripting and automation.
- Experience with CI/CD tools such as Jenkins, GitLab CI, or CircleCI.
- Hands-on experience with infrastructure as code tools like Terraform, Ansible, or Chef.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Knowledge of containerization and orchestration tools like Docker and Kubernetes.
- Understanding of networking, security, and system administration.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Preferred Qualifications:
- Experience with monitoring and logging tools such as Prometheus, Grafana, or ELK stack.
- Knowledge of database management and SQL.
- Familiarity with Agile and DevOps methodologies.
- Certification in AWS, Azure, or Google Cloud is a plus.
About Lean Technologies
Lean Technologies is transforming the financial landscape by delivering open banking solutions across the MENA region. Our products have received overwhelmingly positive feedback from both developers and customers, and our recent $33 million Series A round, led by Sequoia, marks our commitment to further expansion and innovation in the GCC. We're dedicated to enabling the next generation of financial innovation and are constantly looking for driven, entrepreneurial individuals to join us on this exciting journey.
About the Role:
As a Senior SDET, you will play a critical role in ensuring the reliability and high performance of our open banking systems. This position involves extensive collaboration with development teams to guarantee the delivery of top-quality products. You will be responsible for creating and executing comprehensive test plans, contributing to the development and maintenance of our in-house automation framework, and analyzing test results to identify potential issues.
Key Responsibilities:
- Implement, maintain, and adapt automation frameworks for open banking solutions.
- Collaborate closely with all stakeholders to understand the full context of deliveries and translate complex functional and non-functional requirements into actionable tasks.
- Ensure Quality Guidelines are met during the development cycle.
- Monitor and report on test results, identifying potential performance issues.
- Identify and recommend improvements to optimize the performance and scalability of open banking systems.
- Participate in code reviews.
Requirements:
- Bachelor’s degree in Computer Science, Software Engineering, or a related field.
- Minimum 5 years of experience in the testing domain, with a focus on automation.
- Proven experience as an Automation Engineer or SDET, with a track record of building and maintaining scalable, reliable test frameworks.
- Strong programming skills in languages such as JavaScript, TypeScript, and Python.
- Experience with automated testing tools and frameworks (e.g., WebdriverIO, Selenium).
- A passion for testing, innovation, and a demonstrated ability to work independently.
- Experience with CI/CD tools like Jenkins.
- Excellent verbal, written, and interpersonal skills.
- Knowledge of performance testing tools such as Grafana and Kibana is a plus.
Why Join Us?
Lean is at the forefront of financial innovation, and this is just the beginning. We are committed to expanding our impact across the region and are always on the lookout for talented individuals passionate about solving complex problems. In addition to competitive compensation, we offer:
- Private healthcare
- Flexible office hours
- Meaningful equity stakes
Salary: INR 15 to INR 30 lakhs per annum
Performance Bonus: Up to 10% of the base salary can be added
Location: Bangalore or Pune
Experience: 2-5 years
About AbleCredit:
AbleCredit is on a mission to solve the credit gap of emerging economies. In India alone, the credit gap is over USD 5T (trillion!). This is the single largest contributor to poverty, a poor Gini index, and a lack of opportunities. Our vision is to deploy AI reliably and safely to solve some of the greatest problems of humanity.
Job Description:
This role is ideal for someone with a strong foundation in deep learning and hands-on experience with AI technologies.
- You will be tasked with solving complex, real-world problems using advanced machine learning models in a privacy-sensitive domain, where your contributions will have a direct impact on business-critical processes.
- As a Machine Learning Engineer at AbleCredit, you will collaborate closely with the founding team, who bring decades of industry expertise to the table.
- You’ll work on deploying cutting-edge Generative AI solutions at scale, ensuring they align with strict privacy requirements and optimize business outcomes.
This is an opportunity for experienced engineers to bring creative AI solutions to one of the most challenging and evolving sectors, while making a significant difference to the company’s growth and success.
Requirements:
- Experience: 2-4 years of hands-on experience in applying machine learning and deep learning techniques to solve complex business problems.
- Technical Skills: Proficiency in standard ML tools and languages, including:
- Python: Strong coding ability for building, training, and deploying machine learning models.
- PyTorch (or MLX or Jax): Solid experience in one or more deep learning frameworks for developing and fine-tuning models.
- Shell scripting: Familiarity with Unix/Linux shell scripting for automation and system-level tasks.
- Mathematical Foundation: Good understanding of the mathematical principles behind machine learning and deep learning (linear algebra, calculus, probability, optimization).
- Problem Solving: A passion for solving tough, ambiguous problems using AI, especially in data-sensitive, large-scale environments.
- Privacy & Security: Awareness and understanding of working in privacy-sensitive domains, adhering to best practices in data security and compliance.
- Collaboration: Ability to work closely with cross-functional teams, including engineers, product managers, and business stakeholders, and communicate technical ideas effectively.
- Work Experience: This position is for experienced candidates only.
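The mathematical foundation called for above (calculus, optimization) is well captured by plain gradient descent, the loop underneath deep-learning training: repeatedly step against the gradient until the objective stops improving. A minimal sketch on a one-dimensional quadratic:

```python
def gradient_descent(grad, x0: float, lr: float = 0.1, steps: int = 200) -> float:
    """Plain gradient descent: x <- x - lr * grad(x), repeated.
    Frameworks like PyTorch automate grad() via autodiff."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3);
# the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Each step multiplies the error by (1 - 2 * lr) here, so with lr = 0.1 the iterate converges geometrically to 3; too large a learning rate would make that factor exceed 1 in magnitude and diverge, which is the same stability trade-off tuned in real training loops.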
Additional Information:
- Location: Pune or Bangalore
- Work Environment: Collaborative and entrepreneurial, with close interactions with the founders.
- Growth Opportunities: Exposure to large-scale AI systems, GenAI, and working in a data-driven privacy-sensitive domain.
- Compensation: Competitive salary and ESOPs, based on experience and performance
- Industry Impact: You’ll be at the forefront of applying Generative AI to solve high-impact problems in the finance/credit space, helping shape the future of AI in the business world.
at VE3 Global
About the Company:
We are a dynamic and innovative company committed to delivering exceptional solutions that empower our clients to succeed. With our headquarters in the UK and a global footprint across the US, Noida, and Pune in India, we bring a decade of expertise to every endeavour, driving real results. We take a holistic approach to project delivery, providing end-to-end services that encompass everything from initial discovery and design to implementation, change management, and ongoing support. Our goal is to help clients leverage the full potential of the Salesforce platform to achieve their business objectives.
What Makes VE3 The Best For You: We think of your family as our family, no matter the shape or size. We offer maternity leave, PF fund contributions, and a 5-day working week, along with a generous paid time off program that helps you balance your work and personal life.
Job Overview:
We are looking for a talented and experienced Senior Full Stack Web Developer who will be responsible for designing, developing, and implementing software solutions. As a part of our innovative team in Pune, you will work closely with global teams, transforming requirements into technical solutions while maintaining a strong focus on quality and efficiency.
Requirements
Key Responsibilities:
Software Design & Development:
Design software solutions based on requirements and within the constraints of architectural and design guidelines.
Derive software requirements, validate software specifications, and conduct feasibility analysis.
Accurately translate software architecture into design and code.
Technical Leadership:
Guide Scrum team members on design topics and ensure consistency against the design and architecture.
Lead the team in test automation design and implementation.
Identify opportunities for harmonization and reuse of components/technology.
Coding & Implementation:
Actively participate in coding features and bug-fixing, ensuring adherence to coding and quality guidelines.
Lead by example in delivering solutions for self-owned components.
Collaboration & Coordination:
Collaborate with globally distributed teams to develop scalable and high-quality software products.
Ensure seamless integration and communication across multiple locations.
Required Skills and Qualifications:
Education: Bachelor's degree in Engineering or a related technical field.
Experience: 4-5 years of experience in software design and development.
Technical Skills:
Backend Development:
Strong experience in microservices API development using Java, Python, Spring Cloud.
Proficiency in build tools like Maven.
Frontend Development:
Expertise in full stack web development using JavaScript, Angular, React JS, Node.js, HTML5, and CSS3.
Database Knowledge:
Working knowledge of Oracle/PostgreSQL databases.
Cloud & DevOps:
Hands-on experience with AWS (Lambda, API Gateway, S3, EC2, EKS).
Exposure to CI/CD tools, code analysis, and test automation.
Operating Systems:
Proficiency in both Windows and Unix-based environments.
Nice to Have:
Experience with Terraform for infrastructure automation.
Soft Skills:
Individual Contributor: Ability to work independently while being a strong team player.
Problem-Solving: Strong analytical skills to identify issues and implement effective solutions.
Communication: Excellent verbal and written communication skills for collaboration with global teams.
Benefits
- Competitive salary and benefits package.
- Unlimited opportunities for professional growth and development.
- Collaborative and supportive work environment.
- Flexible working hours
- Work life balance
- Onsite opportunities
- Retirement Plans
- Team Building activities
- Visit us @ https://www.ve3.global
Job Purpose:
To develop and maintain robust, scalable Python-based applications at Nirmitee.io, contributing to our product engineering and healthcare technology solutions with a focus on efficiency, innovation, and best practices.
Key Responsibilities:
- Python Development: Write clean, efficient, and maintainable Python code
- Develop and maintain scalable applications using Python frameworks like Django or Flask
- Implement RESTful APIs and integrate with front-end technologies
- Database Management: Work with both SQL and NoSQL databases, optimizing queries and ensuring data integrity
- Design and implement database structures to support application requirements
- Cloud and DevOps: Deploy and maintain applications in cloud environments (e.g., AWS, GCP)
- Implement CI/CD pipelines for automated testing and deployment
- Quality Assurance: Write and maintain comprehensive unit tests and integration tests
- Participate in code reviews to ensure high code quality and share knowledge
- Collaboration and Innovation: Work closely with cross-functional teams to deliver integrated solutions
- Stay updated with the latest Python ecosystem developments and suggest improvements
Required Skills And Qualification:
- 3+ years of experience in Python development
- Bachelor's degree in Computer Science, Engineering, or a related field
- Proficiency in Python and its ecosystem (e.g., Django, Flask, FastAPI)
- Experience with SQL and NoSQL databases
- Familiarity with cloud platforms (preferably AWS) and containerization (Docker)
- Understanding of software development best practices and design patterns
Soft Skills:
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities
- Self-motivated and able to work independently when required
- Adaptable and eager to learn new technologies
Company Culture:
Nirmitee.io is an innovative IT services company, driven by a passion for technology and a commitment to delivering exceptional solutions in product engineering and healthcare technology. We foster a culture of creativity, collaboration, and continuous learning.
Job Overview:
Python Lead responsibilities include developing and maintaining AI pipelines, including data preprocessing, feature extraction, model training, and evaluation.
Responsibilities:
- Designing, developing, and implementing generative AI models and algorithms utilizing state-of-the-art techniques such as GPT, VAE, and GANs.
- Conducting research to stay up-to-date with the latest advancements in generative AI, machine learning, and deep learning techniques and identify opportunities to integrate them into our products and services.
- 7+ years of experience in creating REST APIs using popular Python web frameworks like Django, Flask, or FastAPI.
- Knowledge of databases like PostgreSQL, Elasticsearch, MongoDB, etc.
- Knowledge of working with external integrations like Redis, Kafka, S3, EC2, etc.
- Some experience in ML integrations will be a plus.
Requirements:
- Work experience as a Python Developer
- Team spirit
- Good problem-solving skills
- Proficient in Python and have experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras.
- Strong knowledge of data structures, algorithms, and software engineering principles
- Nice to have experience with natural language processing (NLP) techniques and tools, such as SpaCy, NLTK, or Hugging Face
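The preprocessing and feature-extraction stages of the AI pipelines mentioned above can be sketched with a plain-Python bag-of-words example; the tokenization rule and function names here are illustrative, and real work would use SpaCy, NLTK, or a similar library.

```python
# Minimal AI-pipeline sketch: text preprocessing -> feature extraction.
import re
from collections import Counter

def preprocess(text):
    """Lowercase and tokenize on runs of letters/digits."""
    return re.findall(r"[a-z0-9]+", text.lower())

def bag_of_words(docs):
    """Build a shared vocabulary and per-document term-count vectors."""
    tokenized = [preprocess(d) for d in docs]
    vocab = sorted({tok for doc in tokenized for tok in doc})
    vectors = []
    for doc in tokenized:
        counts = Counter(doc)
        vectors.append([counts.get(tok, 0) for tok in vocab])
    return vocab, vectors

if __name__ == "__main__":
    vocab, vecs = bag_of_words(["The cat sat.", "The cat and the dog."])
    print(vocab)  # sorted vocabulary
    print(vecs)   # one count vector per document
```

The count vectors produced here are the kind of features a downstream model-training and evaluation stage would consume.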
Job Description
We are looking for a talented Java Developer to work abroad. You will be responsible for developing high-quality software solutions, working on both server-side components and integrations, and ensuring optimal performance and scalability.
Preferred Qualifications
- Experience with microservices architecture.
- Knowledge of cloud platforms (AWS, Azure).
- Familiarity with Agile/Scrum methodologies.
- Understanding of front-end technologies (HTML, CSS, JavaScript) is a plus.
Requirement Details
Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Proven experience as a Java Developer or similar role.
Strong knowledge of Java programming language and its frameworks (Spring, Hibernate).
Experience with relational databases (e.g., MySQL, PostgreSQL) and ORM tools.
Familiarity with RESTful APIs and web services.
Understanding of version control systems (e.g., Git).
Solid understanding of object-oriented programming (OOP) principles.
Strong problem-solving skills and attention to detail.
About Lean Technologies
Lean is on a mission to revolutionize the fintech industry by providing developers with a universal API to access their customers' financial accounts across the Middle East. We’re breaking down infrastructure barriers and empowering the growth of the fintech industry. With Sequoia leading our $33 million Series A round, Lean is poised to expand its coverage across the region while continuing to deliver unparalleled value to developers and stakeholders.
Join us and be part of a journey to enable the next generation of financial innovation. We offer competitive salaries, private healthcare, flexible office hours, and meaningful equity stakes to ensure long-term alignment. At Lean, you'll work on solving complex problems, build a lasting legacy, and be part of a diverse, inclusive, and equal opportunity workplace.
About the role:
Are you a highly motivated and experienced software engineer looking to take your career to the next level? Our team at Lean is seeking a talented engineer to help us build the distributed systems that allow our engineering teams to deploy our platform in multiple geographies across various deployment solutions. You will work closely with functional heads across software, QA, and product teams to deliver scalable and customizable release pipelines.
Responsibilities
- Distributed systems architecture – understand and manage the most complex systems
- Continual reliability and performance optimization – enhancing observability stack to improve proactive detection and resolution of issues
- Employing cutting-edge methods and technologies, continually refining existing tools to enhance performance and drive advancements
- Problem-solving capabilities – troubleshooting complex issues and proactively reducing toil through automation
- Experience in technical leadership and setting technical direction for engineering projects
- Collaboration skills – working across teams to drive change and provide guidance
- Technical expertise – depth of skills and the ability to act as subject matter expert in one or more of: IaC, observability, coding, reliability, debugging, system design
- Capacity planning – effectively forecasting demand and reacting to changes
- Analyze and improve efficiency, scalability, and stability of various system resources
- Incident response – rapidly detecting and resolving critical incidents. Minimizing customer impact through effective collaboration, escalation (including periodic on-call shifts) and postmortems
Requirements
- 10+ years of experience in Systems Engineering, DevOps, or SRE roles running large-scale infrastructure, cloud, or web services
- Strong background in Linux/Unix Administration and networking concepts
- We work on OCI but would accept candidates with solid GCP/AWS or other cloud providers’ knowledge and experience
- 3+ years of experience with managing Kubernetes clusters, Helm, Docker
- Experience in operating CI/CD pipelines that build and deliver services on the cloud and on-premise
- Work with CI/CD tools/services like Jenkins/GitHub-Actions/ArgoCD etc.
- Experience with configuration management tools either Ansible, Chef, Puppet, or equivalent
- Infrastructure as Code - Terraform
- Experience in production environments with both relational and NoSQL databases
- Coding with one or more of the following: Java, Python, and/or Go
Bonus
- MultiCloud or Hybrid Cloud experience
- OCI and GCP
Why Join Us?
At Lean, we value talent, drive, and entrepreneurial spirit. We are constantly on the lookout for individuals who identify with our mission and values, even if they don’t meet every requirement. If you're passionate about solving hard problems and building a legacy, Lean is the right place for you. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, gender, sexual orientation, or disability.
Availability: Full time
Location: Pune, India
Experience: 4- 6 years
Tvarit Solutions Private Limited is a wholly owned subsidiary of TVARIT GmbH, Germany. TVARIT provides software to reduce manufacturing waste such as scrap, energy, and machine downtime using its patented technology. With its software products and a highly competent team from renowned universities, TVARIT has gained customer trust across 4 continents within a short span of 5 years. TVARIT was ranked among the top 8 out of 490 AI companies by the European Data Incubator, apart from many more awards from the German government and industrial organizations, making TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a passionate Full Stack Developer Level 2 to join our technology team in Pune. You will be responsible for handling operations, design, development, testing, leading the software development team and working toward infrastructure development that will support the company’s solutions. You will get an opportunity to work closely on projects that will involve the automation of the manufacturing process.
Key responsibilities
- Creating Plugins for Open-Source framework Grafana using React & Golang.
- Develop pixel-perfect implementation of the front end using React.
- Design efficient DB interaction to optimize performance.
- Interact with and build Python APIs.
- Collaborate across teams and lead/train the junior developers.
- Design and maintain functional requirement documents and guides.
- Get feedback from, and build solutions for, users and customers.
Must have worked with these technologies:
- 2+ years of experience working with React-Typescript on a production level
- Experience with API creation using node.js or Python
- GitHub or any other version control system (VCS)
- Have worked with any Linux/Unix-based operating system (Ubuntu, Debian, macOS, etc.)
Good to have experience:
- Python-based backend technologies, relational and non-relational databases, Python web frameworks (Django or Flask)
- Experience with the Go programming language
- Experience working with Grafana, or on any other micro frontend architecture framework
- Experience with Docker
- Leading a team for at least a year
Benefits and perks:
- A culture of innovation, creativity, learning, and even failure; we believe in bringing out the best in you.
- Progressive leave policy for effective work-life balance.
- Get mentored by highly qualified internal resource groups and opportunities to avail industry-driven mentorship programs.
- Multicultural peer groups and supportive workplace policies.
- Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.
What is it like to work for a startup?
Working for TVARIT, a deep-tech German IT startup, can offer you a unique blend of innovation, collaboration, and growth opportunities, but it is essential to approach it with a willingness to adapt and thrive in a dynamic environment.
If this position sparked your interest, do apply today!
By submitting my documents for the recruitment process, I agree that my data will be processed for the purpose of the recruitment process and stored for an additional 6 months after the process is completed. Without your consent, we unfortunately cannot consider your documents for the recruitment process. You can revoke your consent at any time. Further information on how we process your data can be found in our privacy policy at the following link: https://tvarit.com/privacy-policy/
at Wissen Technology
Job Requirements:
Intermediate Linux Knowledge
- Experience with shell scripting
- Familiarity with Linux commands such as grep, awk, sed
- Required
Advanced Python Scripting Knowledge
- Strong expertise in Python
- Required
Ruby
- Nice to have
Basic Knowledge of Network Protocols
- Understanding of TCP/UDP, Multicast/Unicast
- Required
Packet Captures
- Experience with tools like Wireshark, tcpdump, tshark
- Nice to have
High-Performance Messaging Libraries
- Familiarity with tools like Tibco, 29West, LBM, Aeron
- Nice to have
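As a sketch of the networking fundamentals required above (TCP/UDP, unicast), a minimal loopback UDP round-trip using the standard library; the address and payload are arbitrary examples, not anything specific to this role.

```python
# Minimal UDP sketch: send a datagram to a loopback receiver and read it back.
import socket

def udp_roundtrip(message: bytes) -> bytes:
    """Send `message` over UDP to a loopback socket and return what arrives."""
    # Receiver bound to an OS-assigned loopback port.
    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.bind(("127.0.0.1", 0))
    recv_sock.settimeout(2.0)
    port = recv_sock.getsockname()[1]

    # Sender: UDP is connectionless, so sendto() carries the full address.
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sock.sendto(message, ("127.0.0.1", port))

    data, _addr = recv_sock.recvfrom(4096)
    send_sock.close()
    recv_sock.close()
    return data

if __name__ == "__main__":
    print(udp_roundtrip(b"hello"))
```

Unlike TCP, nothing here guarantees delivery or ordering; on a real network the receiver must tolerate loss, which is exactly the trade-off the high-performance messaging libraries above are built around.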
About Jeeva.ai
At Jeeva.ai, we're on a mission to revolutionize the future of work by building AI employees that automate all manual tasks—starting with AI Sales Reps. Our vision is simple: "Anything that doesn’t require deep human connection can be automated & done better, faster & cheaper with AI." We’ve created a fully automated SDR using AI that generates 3x more pipeline than traditional sales teams at a fraction of the cost.
As a dynamic startup, we are backed by Alt Capital (founded by Jack Altman & Sam Altman), Marc Benioff (CEO, Salesforce), Gokul (Board, Coinbase), Bonfire (investors in ChowNow), Techtsars (investors in Uber), Sapphire (investors in LinkedIn), and Microsoft. With $1M ARR in just 3 months after launch, we're not just growing - we're thriving and making a significant impact in the world of artificial intelligence.
As we continue to scale, we're looking for mid-senior Full Stack Engineers who are passionate, ambitious, and eager to make an impact in the AI-driven future of work.
About You
- Experience: 3+ years of experience as a Full Stack Engineer with a strong background in React, Python, MongoDB, and AWS.
- Automated CI/CD: Experienced in implementing and managing automated CI/CD pipelines using GitHub Actions and AWS Cloudformation.
- System Architecture: Skilled in architecting scalable solutions for systems at scale, leveraging caching strategies, messaging queues, and async/await paradigms for highly performant systems.
- Cloud-Native Expertise: Proficient in deploying cloud-native apps using AWS (Lambda, API Gateway, S3, ECS), with a focus on serverless architectures to reduce overhead and boost agility.
- Development Tooling: Proficient in a wide range of development tools such as FastAPI, React State Management, REST APIs, WebSockets, and robust version control using Git.
- AI and GPTs: Competent in applying AI technologies, particularly in using GPT models for natural language processing, automation and creating intelligent systems.
- Impact-Driven: You've built and shipped products that users love and have seen the impact of your work at scale.
- Ownership: You take pride in owning projects from start to finish and are comfortable wearing multiple hats to get the job done.
- Curious Learner: You stay ahead of the curve, eager to explore and implement the latest technologies, particularly in AI.
- Collaborative Spirit: You thrive in a team environment and can work effectively with both technical and non-technical stakeholders.
- Ambitious: You have a hunger for success and are eager to contribute to a fast-growing company with big goals.
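The async/await paradigm mentioned above can be sketched with the standard library's asyncio; `fetch_record` is a hypothetical stand-in for a real API or database call.

```python
# Minimal async/await sketch: run simulated I/O-bound tasks concurrently.
import asyncio

async def fetch_record(record_id: int) -> dict:
    """Hypothetical I/O-bound call; sleep stands in for network latency."""
    await asyncio.sleep(0.01)
    return {"id": record_id, "ok": True}

async def fetch_all(ids):
    # gather() runs the coroutines concurrently instead of one after another,
    # so total wall time is roughly one latency, not len(ids) latencies.
    return await asyncio.gather(*(fetch_record(i) for i in ids))

if __name__ == "__main__":
    results = asyncio.run(fetch_all(range(3)))
    print(results)
```

This is the pattern behind highly performant FastAPI endpoints: the event loop overlaps waiting time across requests instead of blocking a thread per call.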
What You’ll Be Doing
- Build and Innovate: Develop and scale AI-driven products like Gigi (AI Outbound SDR) and Jim (AI Inbound SDR), and automate across voice & video with AI.
- Collaborate Across Teams: Work closely with our Product, GTM, and Engineering teams to deliver world-class AI solutions that drive massive value for our customers.
- Integrate and Optimize: Create seamless integrations with popular platforms like Salesforce, LinkedIn, and HubSpot, enhancing our AI’s capabilities.
- Problem Solving: Tackle challenging problems head-on, from data pipelines to user experience, ensuring that every solution is both functional and delightful.
- Drive AI Adoption: Be a key player in transforming how businesses operate by automating workflows, lead generation, and more with AI.
Who are we looking for?
We are looking for a Senior Data Scientist, who will design and develop data-driven solutions using state-of-the-art methods. You should be someone with strong and proven experience in working on data-driven solutions. If you feel you’re enthusiastic about transforming business requirements into insightful data-driven solutions, you are welcome to join our fast-growing team to unlock your best potential.
Job Summary
- Supporting company mission by understanding complex business problems through data-driven solutions.
- Designing and developing machine learning pipelines in Python and deploying them in AWS/GCP, ...
- Developing end-to-end ML production-ready solutions and visualizations.
- Analyse large sets of time-series industrial data from various sources, such as production systems, sensors, and databases to draw actionable insights and present them via custom dashboards.
- Communicating complex technical concepts and findings to non-technical stakeholders of the projects
- Implementing the prototypes using suitable statistical tools and artificial intelligence algorithms.
- Preparing high-quality research papers and participating in conferences to present and report experimental results and research findings.
- Carrying out research collaborating with internal and external teams and facilitating review of ML systems for innovative ideas to prototype new models.
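As an illustration of drawing actionable insights from time-series data, a minimal rolling-mean anomaly check in plain Python; the window and threshold values are arbitrary, and this is a sketch of the idea rather than a production method.

```python
# Minimal time-series sketch: flag points far from a trailing rolling mean.

def rolling_mean(values, window):
    """Trailing mean over the last `window` points (shorter at the start)."""
    means = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        means.append(sum(chunk) / len(chunk))
    return means

def flag_anomalies(values, window=3, threshold=15.0):
    """Indices where a value deviates from the trailing mean by > threshold."""
    means = rolling_mean(values, window)
    return [i for i, (v, m) in enumerate(zip(values, means))
            if abs(v - m) > threshold]

if __name__ == "__main__":
    series = [10, 11, 10, 12, 40, 11, 10]  # sensor readings with one spike
    print(flag_anomalies(series))
```

In practice the same logic would run over pandas Series or streaming sensor data, with a statistically chosen threshold rather than a fixed constant.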
Qualification and experience
- B.Tech/Master's/Ph.D. in computer science, electrical engineering, mathematics, data science, or a related field.
- 5+ years of professional experience in the field of machine learning, and data science.
- Experience with large-scale Time-series data-based production code development is a plus.
Skills and competencies
- Familiarity with Docker, ML libraries such as PyTorch, sklearn, and pandas, as well as SQL and Git, is a must.
- Ability to work on multiple projects. Must have strong design and implementation skills.
- Ability to conduct research based on complex business problems.
- Strong presentation skills and the ability to collaborate in a multi-disciplinary team.
- Must have programming experience in Python.
- Excellent English communication skills, both written and verbal.
Benefits and Perks
- A culture of innovation, creativity, learning, and even failure; we believe in bringing out the best in you.
- Progressive leave policy for effective work-life balance.
- Get mentored by highly qualified internal resource groups, with opportunities to avail of industry-driven mentorship programs; we believe in empowering people.
- Multicultural peer groups and supportive workplace policies.
- Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.
Hiring Process
- Call with Talent Acquisition Team: After application screening, a first-level screening with the talent acquisition team to understand the candidate's goals and alignment with the job requirements.
- First Round: Technical round 1 to gauge your domain knowledge and functional expertise.
- Second Round: In-depth technical round and discussion about the departmental goals, your role, and expectations.
- Final HR Round: Culture fit round and compensation discussions.
- Offer: Congratulations, you made it!
If this position sparked your interest, apply now to initiate the screening process.
TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the Manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in forward-looking Maintenance, increasing the OEE and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned Universities, and the award of a renowned AI prize (e.g., EU Horizon 2020) which makes Tvarit one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing Industry with over two years of experience to join our team. As a data engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
- Experience in the manufacturing industry (metal industry is a plus)
- 2+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Spark, Flink, and Hadoop, and of NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
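The ETL-pipeline skills above can be sketched end to end with the standard library alone; the table and column names are illustrative, standing in for real manufacturing data sources.

```python
# Minimal ETL sketch: extract rows from CSV text, transform, load into SQLite.
import csv
import io
import sqlite3

RAW_CSV = """machine,scrap_kg
press_1, 12.5
press_2,  3.0
press_1,  7.25
"""

def extract(text):
    """Extract: parse CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: strip whitespace, cast types, drop incomplete rows."""
    out = []
    for row in rows:
        if row["machine"] and row["scrap_kg"]:
            out.append((row["machine"].strip(), float(row["scrap_kg"])))
    return out

def load(rows, conn):
    """Load: insert the cleaned rows into a SQLite table."""
    conn.execute("CREATE TABLE IF NOT EXISTS scrap (machine TEXT, scrap_kg REAL)")
    conn.executemany("INSERT INTO scrap VALUES (?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    load(transform(extract(RAW_CSV)), conn)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    run_pipeline(conn)
    print(conn.execute("SELECT SUM(scrap_kg) FROM scrap").fetchone()[0])
```

A production version would swap the CSV string for files or APIs, SQLite for a warehouse or lakehouse, and orchestrate the steps with a scheduler.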
We are seeking a skilled and motivated senior Data Engineer from the manufacturing Industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building & enhancing a data lake-house & subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Spark, Flink, and Hadoop, and of NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
- Bachelor’s degree in computer science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
Benefits And Perks
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!
at DeepIntent
With a core belief that advertising technology can measurably improve the lives of patients, DeepIntent is leading the healthcare advertising industry into the future. Built purposefully for the healthcare industry, the DeepIntent Healthcare Advertising Platform is proven to drive higher audience quality and script performance with patented technology and the industry’s most comprehensive health data. DeepIntent is trusted by 600+ pharmaceutical brands and all the leading healthcare agencies to reach the most relevant healthcare provider and patient audiences across all channels and devices. For more information, visit DeepIntent.com or find us on LinkedIn.
We are seeking a skilled and experienced Site Reliability Engineer (SRE) to join our dynamic team. The ideal candidate will have a minimum of 3 years of hands-on experience in managing and maintaining production systems, with a focus on reliability, scalability, and performance. As an SRE at Deepintent, you will play a crucial role in ensuring the stability and efficiency of our infrastructure, as well as contributing to the development of automation and monitoring tools.
Responsibilities:
- Deploy, configure, and maintain Kubernetes clusters for our microservices architecture.
- Utilize Git and Helm for version control and deployment management.
- Implement and manage monitoring solutions using Prometheus and Grafana.
- Work on continuous integration and continuous deployment (CI/CD) pipelines.
- Containerize applications using Docker and manage orchestration.
- Manage and optimize AWS services, including but not limited to EC2, S3, RDS, and AWS CDN.
- Maintain and optimize MySQL databases, Airflow, and Redis instances.
- Write automation scripts in Bash or Python for system administration tasks.
- Perform Linux administration tasks and troubleshoot system issues.
- Utilize Ansible and Terraform for configuration management and infrastructure as code.
- Demonstrate knowledge of networking and load-balancing principles.
- Collaborate with development teams to ensure applications meet reliability and performance standards.
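As a toy illustration of the "automation scripts in Bash or Python for system administration tasks" mentioned above, here is a minimal sketch of a disk-usage check; the path and alert threshold are hypothetical, and a real SRE setup would more likely export such a metric to Prometheus:

```python
import shutil

def disk_usage_percent(path="/"):
    """Return used disk space for `path` as a percentage."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def check_disk(path="/", threshold=80.0):
    """Return an alert string when usage crosses the threshold, else None."""
    pct = disk_usage_percent(path)
    if pct >= threshold:
        return f"ALERT: {path} is {pct:.1f}% full (threshold {threshold}%)"
    return None

if __name__ == "__main__":
    print(check_disk("/") or "disk usage OK")
```

In practice a script like this would run from cron or a node-exporter textfile collector rather than printing to stdout.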
Additional Skills (Good to Know):
- Familiarity with ClickHouse and Druid for data storage and analytics.
- Experience with Jenkins for continuous integration.
- Basic understanding of Google Cloud Platform (GCP) and data center operations.
Qualifications:
- Minimum 3 years of experience in a Site Reliability Engineer role or similar.
- Proven experience with Kubernetes, Git, Helm, Prometheus, Grafana, CI/CD, Docker, and microservices architecture.
- Strong knowledge of AWS services, MySQL, Airflow, Redis, AWS CDN.
- Proficient in scripting languages such as Bash or Python.
- Hands-on experience with Linux administration.
- Familiarity with Ansible and Terraform for infrastructure management.
- Understanding of networking principles and load balancing.
Education:
Bachelor's degree in Computer Science, Information Technology, or a related field.
DeepIntent is committed to bringing together individuals from different backgrounds and perspectives. We strive to create an inclusive environment where everyone can thrive, feel a sense of belonging, and do great work together.
DeepIntent is an Equal Opportunity Employer, providing equal employment and advancement opportunities to all individuals. We recruit, hire and promote into all job levels the most qualified applicants without regard to race, color, creed, national origin, religion, sex (including pregnancy, childbirth and related medical conditions), parental status, age, disability, genetic information, citizenship status, veteran status, gender identity or expression, transgender status, sexual orientation, marital, family or partnership status, political affiliation or activities, military service, immigration status, or any other status protected under applicable federal, state and local laws. If you have a disability or special need that requires accommodation, please let us know in advance.
DeepIntent’s commitment to providing equal employment opportunities extends to all aspects of employment, including job assignment, compensation, discipline and access to benefits and training.
TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT enables its customers to make intelligent, well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. Renowned reference customers, competent technology, a strong research team from renowned universities, and a prestigious AI award (e.g., EU Horizon 2020) make TVARIT one of the most innovative AI companies in Germany and Europe.
Requirements:
- Python Experience: Minimum 3+ years.
- Software Development Experience: Minimum 8+ years.
- Data Engineering and ETL Workloads: Minimum 2+ years.
- Familiarity with Software Development Life Cycle (SDLC).
- CI/CD Pipeline Development: Experience in developing CI/CD pipelines for large projects.
- Agile Framework & Sprint Methodology: Experience with Jira.
- Source Version Control: Experience with GitHub or a similar version control system.
- Team Leadership: Experience leading a team of software developers/data scientists.
Good to Have:
- Experience with Golang.
- DevOps/Cloud Experience (preferably AWS).
- Experience with React and TypeScript.
Responsibilities:
- Mentor and train a team of data scientists and software developers.
- Lead and guide the team in best practices for software development and data engineering.
- Develop and implement CI/CD pipelines.
- Ensure adherence to Agile methodologies and participate in sprint planning and execution.
- Collaborate with the team to ensure the successful delivery of projects.
- Provide on-site support and training in Pune.
Skills and Attributes:
- Strong leadership and mentorship abilities.
- Excellent problem-solving skills.
- Effective communication and teamwork.
- Ability to work in a fast-paced environment.
- Passionate about technology and continuous learning.
Note: This is a part-time position paid on an hourly basis. The initial commitment is 4-8 hours per week, with potential fluctuations.
Join TVARIT and be a pivotal part of shaping the future of software development and data engineering.
Greetings! Wissen Technology is hiring for the position of Data Engineer.
Please find the job description below for your reference:
- Design, develop, and maintain data pipelines on AWS EMR (Elastic MapReduce) to support data processing and analytics.
- Implement data ingestion processes from various sources including APIs, databases, and flat files.
- Optimize and tune big data workflows for performance and scalability.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Manage and monitor EMR clusters, ensuring high availability and reliability.
- Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and store data in data lakes and data warehouses.
- Implement data security best practices to ensure data is protected and compliant with relevant regulations.
- Create and maintain technical documentation related to data pipelines, workflows, and infrastructure.
- Troubleshoot and resolve issues related to data processing and EMR cluster performance.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering, with a focus on big data technologies.
- Strong experience with AWS services, particularly EMR, S3, Redshift, Lambda, and Glue.
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with big data frameworks and tools such as Hadoop, Spark, Hive, and Pig.
- Solid understanding of data modeling, ETL processes, and data warehousing concepts.
- Experience with SQL and NoSQL databases.
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).
- Strong problem-solving skills and the ability to work independently and collaboratively in a team environment.
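The extract-transform-load responsibilities above can be sketched in plain Python; this is a minimal, Spark-free illustration of the cleanse-and-transform pattern (the record fields and sample data are hypothetical, and on EMR the same shape would be expressed with Spark DataFrames):

```python
def extract(rows):
    """Extract: yield raw records from any iterable source (API page, CSV reader, etc.)."""
    yield from rows

def transform(records):
    """Transform: cleanse records - drop incomplete rows, normalize fields."""
    for rec in records:
        if rec.get("user_id") is None:
            continue  # cleanse: skip incomplete records
        yield {
            "user_id": int(rec["user_id"]),
            "country": str(rec.get("country", "unknown")).strip().lower(),
        }

def load(records, sink):
    """Load: append cleansed records to the target store (a list stands in for a warehouse table)."""
    count = 0
    for rec in records:
        sink.append(rec)
        count += 1
    return count

raw = [{"user_id": "1", "country": " IN "}, {"user_id": None}, {"user_id": "2"}]
table = []
loaded = load(transform(extract(raw)), table)
```

Keeping each stage a generator means records stream through the pipeline without materializing intermediate collections, which is the same principle Spark applies at cluster scale.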
Job Description:
Ideal experience required – 6-8 years
- Mandatory hands-on experience in .NET Core 8
- Mandatory hands-on experience in Angular (version 10 or above)
- Azure and Microservice Architecture experience is good to have.
- No database or domain constraint
Skills:
- 7 to 10 years of experience managing .NET projects closely with internal and external clients in structured contexts in an international environment.
- Strong knowledge of .NET Core, .NET MVC, C#, SQL Server & JavaScript
- Working experience in Angular
- Familiar with various design and architectural patterns
- Familiarity with Git for source code management.
- Should be able to write clean, readable, and easily maintainable code.
- Understanding of fundamental design principles for building a scalable application
- Experience in implementing automated testing platforms and unit tests.
Nice to have:
- AWS
- Elasticsearch
- MongoDB
Responsibilities:
- Able to handle modules/projects independently with minimal supervision.
- Strong troubleshooting and problem-solving skills.
- Able to take complete ownership of modules and projects.
- Able to communicate and coordinate with multiple teams.
- Good verbal and written communication skills.
at Monsoonfish
Preferred Skills:
- Experience with XML-based web services (SOAP, REST).
- Knowledge of database technologies (SQL, NoSQL) for XML data storage.
- Familiarity with version control systems (Git, SVN).
- Understanding of JSON and other data interchange formats.
- Certifications in XML technologies are a plus.
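As an illustration of the XML-to-JSON interchange work these skills cover, here is a minimal sketch using only Python's standard library; the `<orders>` document structure is hypothetical:

```python
import json
import xml.etree.ElementTree as ET

XML_DOC = """
<orders>
  <order id="1"><item>widget</item><qty>3</qty></order>
  <order id="2"><item>gear</item><qty>1</qty></order>
</orders>
"""

def xml_to_records(xml_text):
    """Parse an <orders> document into a list of plain dicts."""
    root = ET.fromstring(xml_text)
    return [
        {"id": o.get("id"), "item": o.findtext("item"), "qty": int(o.findtext("qty"))}
        for o in root.findall("order")
    ]

records = xml_to_records(XML_DOC)
as_json = json.dumps(records)  # ready for a JSON consumer
```

A SOAP or REST service would wrap the same parsing step behind an HTTP endpoint; the dict representation also maps directly onto SQL rows or NoSQL documents for storage.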
Sr. Data Engineer (Data Warehouse-Snowflake)
Experience: 5+ years
Location: Pune (Hybrid)
As a Senior Data Engineer with Snowflake expertise, you are a curious, innovative subject-matter expert who mentors young professionals. You play a key role in translating the vision and data strategy into data solutions and delivering them. With your knowledge you will help create data-driven thinking within the organization, not just within data teams, but also across the wider stakeholder community.
Skills Preferred
- Advanced written, verbal, and analytic skills, and demonstrated ability to influence and facilitate sustained change. Ability to convey information clearly and concisely to all levels of staff and management about programs, services, best practices, strategies, and organizational mission and values.
- Proven ability to focus on priorities, strategies, and vision.
- Very good understanding of Data Foundation initiatives, such as data modelling, data quality management, data governance, data maturity assessments and data strategy in support of the key business stakeholders.
- Actively deliver the roll-out and embedding of Data Foundation initiatives in support of the key business programs advising on the technology and using leading market standard tools.
- Coordinate the change management process, incident management and problem management process.
- Ensure traceability of requirements from Data through testing and scope changes, to training and transition.
- Drive implementation efficiency and effectiveness across the pilots and future projects to minimize cost, increase speed of implementation and maximize value delivery.
Knowledge Preferred
- Extensive knowledge of and hands-on experience with Snowflake and its components: user/group management, data store/warehouse management, external stages/tables, working with semi-structured data, Snowpipe, etc.
- Implement and manage CI/CD for migrating and deploying Snowflake code to higher environments.
- Proven experience with Snowflake access control and authentication, data security, data sharing, the VS Code extension for Snowflake, replication and failover, and SQL optimization; the analytical ability to quickly troubleshoot and debug development and production issues is key to success in this role.
- Proven technology champion in working with relational and data warehouse databases and query authoring (SQL), as well as working familiarity with a variety of databases.
- Highly experienced in building and optimizing complex queries; good with manipulating, processing, and extracting value from large, disconnected datasets.
- Your experience in handling big data sets and big data technologies will be an asset.
- Proven champion with in-depth knowledge of at least one of the scripting languages: Python, SQL, PySpark.
Primary responsibilities
- You will be an asset to our team, bringing deep technical skills and capabilities to become a key part of projects defining the data journey in our company; keen to engage, network and innovate in collaboration with company-wide teams.
- Collaborate with the data and analytics team to develop and maintain a data model and data governance infrastructure using a range of different storage technologies that enables optimal data storage and sharing using advanced methods.
- Support the development of processes and standards for data mining, data modeling and data protection.
- Design and implement continuous process improvements for automating manual processes and optimizing data delivery.
- Assess and report on the unique data needs of key stakeholders and troubleshoot any data-related technical issues through to resolution.
- Work to improve data models that support business intelligence tools, improve data accessibility and foster data-driven decision making.
- Ensure traceability of requirements from Data through testing and scope changes, to training and transition.
- Manage and lead technical design and development activities for implementation of large-scale data solutions in Snowflake to support multiple use cases (transformation, reporting and analytics, data monetization, etc.).
- Translate advanced business data, integration and analytics problems into technical approaches that yield actionable recommendations, across multiple, diverse domains; communicate results and educate others through design and build of insightful presentations.
- Exhibit strong knowledge of the Snowflake ecosystem and can clearly articulate the value proposition of cloud modernization/transformation to a wide range of stakeholders.
Relevant work experience
Bachelor's degree in a Science, Technology, Engineering, Mathematics or Computer Science discipline, or equivalent, with 7+ years of experience in enterprise-wide data warehousing, governance, policies, procedures, and implementation.
Aptitude for working with data, interpreting results, business intelligence and analytic best practices.
Business understanding
Good knowledge and understanding of Consumer and industrial products sector and IoT.
Good functional understanding of solutions supporting business processes.
Skills (must have)
- Snowflake 5+ years
- Overall different Data warehousing techs 5+ years
- SQL 5+ years
- Data warehouse designing experience 3+ years
- Experience with cloud and on-prem hybrid models in data architecture
- Knowledge of Data Governance and strong understanding of data lineage and data quality
- Programming & scripting: Python, PySpark
- Database technologies such as Traditional RDBMS (MS SQL Server, Oracle, MySQL, PostgreSQL)
Nice to have
- Demonstrated experience in modern enterprise data integration platforms such as Informatica
- AWS cloud services: S3, Lambda, Glue and Kinesis and API Gateway, EC2, EMR, RDS, Redshift and Kinesis
- Good understanding of Data Architecture approaches
- Experience in designing and building streaming data ingestion, analysis and processing pipelines using Kafka, Kafka Streams, Spark Streaming, StreamSets and similar cloud-native technologies.
- Experience with implementation of operations concerns for a data platform such as monitoring, security, and scalability
- Experience working in DevOps, Agile, Scrum, Continuous Delivery and/or Rapid Application Development environments
- Building mock and proof-of-concepts across different capabilities/tool sets exposure
- Experience working with structured, semi-structured, and unstructured data, extracting information, and identifying linkages across disparate data sets
Primary Skills
DynamoDB, Java, Kafka, Spark, Amazon Redshift, AWS Lake Formation, AWS Glue, Python
Skills:
Good work experience showing growth as a Data Engineer.
Hands On programming experience
Implementation Experience on Kafka, Kinesis, Spark, AWS Glue, AWS Lake Formation.
Excellent knowledge of Python, Scala/Java, Spark, AWS (Lambda, Step Functions, DynamoDB, EMR), Terraform, UI (Angular), Git, Maven
Experience of performance optimization in Batch and Real time processing applications
Expertise in Data Governance and Data Security Implementation
Good hands-on design and programming skills for building reusable tools and products. Experience developing in AWS or similar cloud platforms; preferred: ECS, EKS, S3, EMR, DynamoDB, Aurora, Redshift, QuickSight or similar.
Familiarity with systems with very high transaction volumes, microservice design, or data processing pipelines (Spark).
Knowledge of and hands-on experience with serverless technologies such as Lambda, MSK, MWAA, Kinesis Analytics a plus.
Expertise in practices like Agile, Peer reviews, Continuous Integration
Roles and responsibilities:
Determining project requirements and developing work schedules for the team.
Delegating tasks and achieving daily, weekly, and monthly goals.
Responsible for designing, building, testing, and deploying the software releases.
Salary: 25LPA-40LPA
Job Description:
· Proficient in Python.
· Good knowledge of stress/load testing and performance testing.
· Knowledge of Linux.
About the role
As a full-stack engineer, you’ll feel at home if you are hands-on, grounded, opinionated and passionate about building things with technology. Our tech stack ranges widely, with language ecosystems like TypeScript, Java, Scala, Golang, Kotlin, Elixir, Python, .NET, Node.js and even Rust.
This role is ideal for those looking to have a large impact and a huge scope for growth while still being hands-on with technology. We aim to allow growth without becoming “post-technical”. We are extremely selective with our consultants and are able to run our teams with fewer levels of management. You won’t find a BA or iteration manager here! We work in small pizza teams of 2-5 people where a well-founded argument holds more weight than the years of experience. You will have the opportunity to work with clients across domains like retail, banking, publishing, education, ad tech and more where you will take ownership of developing software solutions that are purpose-built to solve our clients’ unique business and technical needs.
Responsibilities
- Produce high-quality code that allows us to put solutions into production.
- Utilize DevOps tools and practices to build and deploy software.
- Collaborate with Data Scientists and Engineers to deliver production-quality AI and Machine Learning systems.
- Build frameworks and supporting tooling for data ingestion from a complex variety of sources.
- Work in short sprints to deliver working software with clear deliverables and client-led deadlines.
- Willingness to be a polyglot developer and learn multiple technologies.
Skills you’ll need
- A maker’s mindset. To be resourceful and have the ability to do things that have no instructions.
- Extensive experience (at least 10 years) as a Software Engineer.
- Deep understanding of programming fundamentals and expertise with at least one programming language (functional or object-oriented).
- A nuanced and rich understanding of code quality, maintainability and practices like Test Driven Development.
- Experience with one or more source control and build toolchains.
- Working knowledge of CI/CD will be an added advantage.
- Understanding of web APIs, contracts and communication protocols.
- Understanding of Cloud platforms, infra-automation/DevOps, IaC/GitOps/Containers, design and development of large data platforms.
What will you experience in terms of culture at Sahaj?
- A culture of trust, respect and transparency
- Opportunity to collaborate with some of the finest minds in the industry
- Work across multiple domains
What are the benefits of being at Sahaj?
- Unlimited leaves
- Life Insurance & Private Health insurance paid by Sahaj
- Stock options
- No hierarchy
- Open Salaries
We are looking for a QA engineer with experience in Python, AWS, and chaos engineering tools (Chaos Monkey, Gremlin).
- Strong understanding of distributed systems
- Cloud computing (AWS) and networking principles
- Ability to understand complex trading systems and prepare and execute plans to induce failures
- Python
- Experience with chaos engineering tooling such as Chaos Monkey, Gremlin, or similar
Domain: Investment Banking or Electronic Trading experience is mandatory.
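A toy illustration of the failure-injection idea behind tools like Chaos Monkey or Gremlin: randomly break a dependency and verify the caller survives. The quote service, failure rate, and retry policy here are all hypothetical:

```python
import random

def chaos(failure_rate, exc=ConnectionError, rng=random.random):
    """Decorator that randomly raises `exc`, to test a caller's resilience."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if rng() < failure_rate:
                raise exc("injected failure")
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaos(failure_rate=0.0)  # 0.0 = never inject; raise toward 1.0 during experiments
def quote_price(symbol):
    """Stand-in for a pricing call in a trading system."""
    return {"symbol": symbol, "price": 101.5}

def resilient_quote(symbol, retries=3):
    """Caller under test: retry on injected connection failures."""
    for _ in range(retries):
        try:
            return quote_price(symbol)
        except ConnectionError:
            continue
    return None
```

Real chaos tooling injects failures at the infrastructure level (killed instances, dropped packets, added latency) rather than in-process, but the experiment structure is the same: define steady state, inject a fault, observe whether the system degrades gracefully.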
- Develop (Python/pytest) automation tests across all components (e.g. API testing, client-server testing, E2E testing, etc.) to meet product requirements and customer usage
- Hands-On experience in Python
- Proficiency in test automation frameworks and tools such as Selenium, Cucumber.
- Experience working in a Microsoft Windows and Linux environment
- Experience using Postman and automated API testing
- Experience designing & executing load/stress and performance testing
- Experience using test case & test execution management tools and issue management tools (e.g. Jira), and development environments (like Visual Studio, IntelliJ, or Eclipse).
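A minimal sketch of the kind of Python/pytest-style API check described above. The endpoint, schema, and offline client stub are hypothetical; a real suite would hit the service with `requests` or a Postman collection:

```python
REQUIRED_FIELDS = {"id": int, "status": str}

def validate_response(payload):
    """Assert an API payload matches the expected schema (pytest-style checks)."""
    for field, ftype in REQUIRED_FIELDS.items():
        assert field in payload, f"missing field: {field}"
        assert isinstance(payload[field], ftype), f"bad type for {field}"
    assert payload["status"] in {"ok", "error"}, "unexpected status value"

def fake_get_order(order_id):
    """Stand-in for a GET /orders/{id} call, so the test runs offline."""
    return {"id": order_id, "status": "ok"}

def test_get_order_schema():
    validate_response(fake_get_order(42))
```

Under pytest, plain `assert` statements like these give detailed failure introspection without any custom assertion helpers.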
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions.
- Competence in building data warehouses and loading data into them.
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements.
- Implement AWS Lambda for event-based tasks.
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
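The event-based Lambda step in the flow above can be sketched as follows; the bucket and key names are hypothetical, and the S3 read plus the Step Functions hand-off are stubbed out so the handler's shape is the focus:

```python
def parse_s3_event(event):
    """Pull (bucket, key) pairs out of an S3 object-created event payload."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]

def lambda_handler(event, context=None):
    """Entry point AWS Lambda would invoke for each S3 event notification."""
    processed = []
    for bucket, key in parse_s3_event(event):
        # In a real job: read the object with boto3, transform it with
        # PySpark/Glue, and let a Step Functions state machine drive the
        # downstream ETL states.
        processed.append(f"s3://{bucket}/{key}")
    return {"status": "ok", "processed": processed}

sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-data"}, "object": {"key": "2024/01/orders.csv"}}}
    ]
}
```

Keeping the event parsing separate from the handler makes the function unit-testable with synthetic events like `sample_event`, without deploying to AWS.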
Publicis Sapient Overview:
As a Senior Associate in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.
Job Summary:
As Senior Associate L2 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.
The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 5+ years of IT experience with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in Big Data technologies and working exposure to the related data services of at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines
4. Strong experience in at least one of the programming languages Java, Scala, Python; Java preferable
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed, working knowledge of data-platform-related services on at least one cloud platform, IAM and data security
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
A modern work platform means a single source of truth for your desk and deskless employees alike, where everything they need is organized and easy to find.
MangoApps was designed to unify your employee experience by combining intranet, communication, collaboration, and training into one intuitive, mobile-accessible workspace.
We are looking for a highly capable machine learning engineer to optimize our machine learning systems. You will be evaluating existing machine learning (ML) processes, performing statistical analysis to resolve data set problems, and enhancing the accuracy of our AI software's predictive automation capabilities.
To ensure success as a machine learning engineer, you should demonstrate solid data science knowledge and experience in a related ML role. A machine learning engineer will be someone whose expertise translates into the enhanced performance of predictive automation software.
AI/ML Engineer Responsibilities:
- Designing machine learning systems and self-running artificial intelligence (AI) software to automate predictive models.
- Transforming data science prototypes and applying appropriate ML algorithms and tools.
- Ensuring that algorithms generate accurate user recommendations.
- Turning unstructured data into useful information by auto-tagging images and text-to-speech conversions.
- Solving complex problems with multi-layered data sets, as well as optimizing existing machine learning libraries and frameworks.
- Applying ML algorithms to huge volumes of historical data to make predictions.
- Running tests, performing statistical analysis, and interpreting test results.
- Documenting machine learning processes.
- Keeping abreast of developments in machine learning.
AI/ML Engineer Requirements:
- Bachelor's degree in computer science, data science, mathematics, or a related field, with at least 3+ years of experience as an AI/ML Engineer
- Advanced proficiency with Python and the FastAPI framework, along with good exposure to libraries like scikit-learn, Pandas, NumPy, etc.
- Experience working with ChatGPT, LangChain (must), Large Language Models (good to have) & Knowledge Graphs
- Extensive knowledge of ML frameworks, libraries, data structures, data modelling, and software architecture.
- In-depth knowledge of mathematics, statistics, and algorithms.
- Superb analytical and problem-solving abilities.
- Great communication and collaboration skills.
Why work with us
- We take delight in what we do, and it shows in the products we offer and ratings of our products by leading industry analysts like IDC, Forrester and Gartner OR independent sites like Capterra.
- Be part of the team that has a great product-market fit, solving some of the most relevant communication and collaboration challenges faced by big and small organizations across the globe.
- MangoApps is a highly collaborative place, and careers at MangoApps come with a lot of growth and learning opportunities. If you’re looking to make an impact, MangoApps is the place for you.
- We focus on getting things done and know how to have fun while we do them. We have a team that brings creativity, energy, and excellence to every engagement.
- A workplace that was listed as one of the top 51 Dream Companies to work for by World HRD Congress in 2019.
- As a group, we are flat and treat everyone the same.
Benefits
We are a young organization and growing fast. Along with a fantastic workplace culture that helps you meet your career aspirations, we provide some comprehensive benefits:
1. Comprehensive Health Insurance for Family (Including Parents) with no riders attached.
2. Accident Insurance for each employee.
3. Sponsored training, courses and nanodegrees.
About You
· Self-motivated: You can work with a minimum of supervision and are capable of strategically prioritizing multiple tasks in a proactive manner.
· Driven: You are a driven team player, collaborator, and relationship builder whose infectious can-do attitude inspires others and encourages great performance in a fast-moving environment.
· Entrepreneurial: You thrive in a fast-paced, changing environment and you’re excited by the chance to play a large role.
· Passionate: You must be passionate about online collaboration and ensuring our clients are successful; we love seeing hunger and ambition.
· Thrive in a start-up mentality with a “whatever it takes” attitude.
Publicis Sapient Overview:
As a Senior Associate in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.
Job Summary:
As Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.
The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.
Role & Responsibilities:
Job Title: Senior Associate L1 – Data Engineering
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 3.5+ years of IT experience with 1.5+ years in data-related technologies
2. Minimum 1.5 years of experience in Big Data technologies
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage
4. Strong experience in at least one of the programming languages Java, Scala, Python; Java preferable
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Working knowledge of data-platform-related services on at least one cloud platform, IAM and data security
7. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Dear Connections,
We are hiring! Join our dynamic team as a QA Automation Tester (Python, Java, Selenium, API, SQL, Git)! We're seeking a passionate professional to contribute to our innovative projects. If you thrive in a collaborative environment, possess expertise in Python, Java, Selenium, and Robot Framework, and are ready to make an impact, apply now! Wissen Technology is committed to fostering innovation, growth, and collaboration. Don't miss this chance to be part of something extraordinary.
Company Overview:
Wissen is the preferred technology partner for executing transformational projects and accelerating implementation through thought leadership and a solution mindset. It is a leading IT solutions and consultancy firm dedicated to providing innovative and customized solutions to global clients. We leverage cutting-edge technologies to empower businesses and drive digital transformation.
#jobopportunity #hiringnow #joinourteam #career #wissen #QA #automationtester #robot #apiautomation #sql #java #python #selenium
About Company:
Our client is the industry-leading provider of CRM messaging solutions. As a forward-thinking global company, it continues to innovate and develop cutting-edge solutions that redefine how businesses digitally communicate with their customers. It works with 2,500 customers across 190 countries, ranging from SMBs to large global enterprises.
About the role:
The Director of Product Management is responsible for overseeing and implementing product development policies, objectives, and initiatives as well as leading research for new products, product enhancements, and product design.
Roles & responsibilities:
- Become a product expert on all company's solutions
- Build and own the product roadmap and timeline.
- Develop and execute a go-to-market strategy that addresses product, pricing, messaging, competitive positioning, product launch and promotion.
- Work with Development leaders to oversee development resources, including managing ROI, timelines, and deliverables.
- Work with the leadership team on driving product strategy, in both new and existing products, to increase overall market share, revenue and customer loyalty.
- Implement and communicate the strategic and technical direction for the department.
- Engage directly with customers to understand market needs and product requirements.
- Develop/implement a suite of Key Performance Indicators (KPIs) to measure product performance, including profitability, customer satisfaction metrics, compliance, and delivery efficiency.
- Define and measure value of software solutions to establish and quantify customer ROI.
- Represent the company by visiting customers to solicit feedback on company products and services.
- Monitor and report progress of projects within agreed-upon timeframes.
- Write high-quality BRDs, PRDs, epics, and user stories
- Create functional strategies and specific objectives, and develop budgets, policies, and procedures.
- Create and analyze financial proposals related to product development, with supporting content showing the allocation of funds to execute these plans.
- Write status updates, iteration delivery and release notes as necessary
- Display a high level of critical thinking in cross-functional process analysis and problem resolution for new and existing products.
- Develop & conduct specialized training on new products launched and raise awareness & application of relevant subject matter.
- Monitor internal processes for efficiency and validity pre & post product launch/changes.
Requirements:
- Excellent communication skills, both verbal and in writing.
- Strong customer focus paired with exceptional presentation skills.
- Skilled at data analytics focused on identifying opportunities, driving insights, and measuring value.
- Strong problem-solving skills.
- Ability to work effectively in a diverse team environment.
- Proven strategic and tactical leadership, motivation, and decision-making skills
Required Education & Experience:
- Bachelor's Degree in Technology related field.
- Experience in working with a geographically diverse development team.
- Strong technical background with the ability to understand and discuss technical concepts.
- Proven experience in Software Development and Product Management.
- 12+ years of experience leading product teams in a fast-paced business environment as Product Leader on Software Platform or SaaS solution.
- Proven ability to lead and influence cross-functional teams.
- Demonstrated success in delivering high-impact products.
Preferred Qualifications
- Transition from a software development role to product management.
- Experience building messaging solutions or marketing or support solutions.
- Experience with agile development methodologies.
- Familiarity with design thinking principles.
- Knowledge of relevant technologies and industry trends.
- Strong project management skills.
Title/Role: Python Django Consultant
Experience: 8+ Years
Work Location: Indore / Pune /Chennai / Vadodara
Notice period: Immediate to 15 Days Max
Key Skills: Python, Django, Crispy Forms, Authentication, Bootstrap, jQuery, Server Side Rendered, SQL, Azure, React, Django DevOps
Job Description:
- Should have experience creating forms in Django; knowledge of Crispy Forms is a plus.
- Must have leadership experience
- Should have a good understanding of function-based and class-based views.
- Should have a good understanding of authentication (JWT and token authentication)
- Django – at least one senior with deep Django experience; the other one or two can be mid-to-senior Python or Django developers
- FrontEnd – Must have React/ Angular, CSS experience
- Database – Ideally SQL but most senior has solid DB experience
- Cloud – Azure preferred but agnostic
- Consulting / client project background ideal.
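The JWT/token authentication called out above comes down to signing a header and payload and later verifying that signature. Below is a framework-agnostic sketch using only the standard library, to show the mechanics; in a real Django project you would normally reach for a maintained library (e.g., djangorestframework-simplejwt) rather than rolling your own, and the secret here is purely illustrative.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # illustrative only; never hard-code real secrets

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(payload: dict) -> str:
    """Build a JWT-shaped token: header.payload.signature (HS256)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

Any tampering with the payload invalidates the signature, which is why the server can trust claims inside a verified token without a database lookup.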
Django Stack:
- Django
- Server Side Rendered HTML
- Bootstrap
- jQuery
- Azure SQL
- Azure Active Directory
- Server-side rendering with jQuery is older tech, but it is what we are comfortable with for internal tools. This is a good combination of a late-adopter agile stack integrated within an enterprise. Potentially we can push toward React for some discrete projects or pages that need more dynamism.
Django Devops:
- Should have expertise with deploying and managing Django in Azure.
- Django deployment to Azure via Docker.
- Django connection to Azure SQL.
- Django auth integration with Active Directory.
- Terraform scripts to make this setup seamless.
- Easy, proven deployment/setup to AWS and GCP.
- Load balancing, more advanced services, task queues, etc.
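The "Django connection to Azure SQL" item above is mostly a settings change. A hedged sketch, assuming the mssql-django database backend; the engine name, ODBC driver string, and environment variable names are illustrative and should be checked against the backend's own documentation:

```python
# settings.py fragment (illustrative; assumes the mssql-django backend is installed)
import os

DATABASES = {
    "default": {
        "ENGINE": "mssql",  # provided by mssql-django
        "NAME": os.environ.get("AZURE_SQL_DB", "appdb"),
        "HOST": os.environ.get("AZURE_SQL_HOST", "myserver.database.windows.net"),
        "PORT": os.environ.get("AZURE_SQL_PORT", "1433"),
        "USER": os.environ.get("AZURE_SQL_USER", "appuser"),
        "PASSWORD": os.environ.get("AZURE_SQL_PASSWORD", ""),
        # Azure SQL requires an encrypted connection via an ODBC driver
        "OPTIONS": {"driver": "ODBC Driver 18 for SQL Server"},
    }
}
```

Keeping credentials in environment variables (or Azure Key Vault) rather than in the settings file is what makes the same Docker image deployable across environments.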
We are looking for a hands-on technical expert who has worked with multiple technology stacks and has experience architecting and building scalable cloud solutions with web and mobile frontends.
What will you work on?
- Interface with clients
- Recommend tech stacks
- Define end-to-end logical and cloud-native architectures
- Define APIs
- Integrate with 3rd party systems
- Create architectural solution prototypes
- Hands-on coding, team lead, code reviews, and problem-solving
What Makes You A Great Fit?
- 5+ years of software experience
- Experience with architecture of technology systems having hands-on expertise in backend, and web or mobile frontend
- Solid expertise and hands-on experience in Python with Flask or Django
- Expertise on one or more cloud platforms (AWS, Azure, Google App Engine)
- Expertise with SQL and NoSQL databases (MySQL, Mongo, ElasticSearch, Redis)
- Knowledge of DevOps practices
- Chatbot, Machine Learning, Data Science/Big Data experience will be a plus
- Excellent communication skills, verbal and written
The job is a full-time position at our Pune (Viman Nagar) office (https://goo.gl/maps/o67FWr1aedo).
(Note: We are working remotely at the moment. However, once the COVID situation improves, the candidate will be expected to work from our office.)
Hiring alert 🚨
Calling all #PythonDevelopers looking for an #ExcitingJobOpportunity 🚀 with one of our #Insurtech clients.
Are you a Junior Python Developer eager to grow your skills in #BackEnd development?
Our company is looking for someone like you to join our dynamic team. If you're passionate about Python and ready to learn from seasoned developers, this role is for you!
📣 About the company
The client is a fast-growing consultancy firm, helping P&C Insurance companies on their digital journey. With offices in Mumbai and New York, they're at the forefront of insurance tech. Plus, they offer a hybrid work culture with flexible timings, typically between 9 to 5, to accommodate your work-life balance.
💡 What you’ll do
📌 Work with other developers.
📌 Implement Python code with assistance from senior developers.
📌 Write effective test cases, such as unit tests, to ensure the code meets the software design requirements.
📌 Ensure Python code is efficient and well written.
📌 Refactor old Python code to ensure it follows modern principles.
📌 Liaise with stakeholders to understand the requirements.
📌 Ensure integration can take place with front end systems.
📌 Identify and fix code where bugs have been identified.
🔎 What you’ll need
📌 Minimum 3 years of experience writing AWS Lambda using Python
📌 Knowledge of other AWS services like CloudWatch and API Gateway
📌 Fundamental understanding of Python and its frameworks.
📌 Ability to write simple SQL queries
📌 Familiarity with AWS Lambda deployment
📌 The ability to problem-solve.
📌 Fast learner with an ability to adapt techniques based on requirements.
📌 Knowledge of how to effectively test Python code.
📌 Great communication and collaboration skills.
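Since the role above centres on writing AWS Lambda functions in Python behind API Gateway, here is a minimal sketch of that pattern. The event shape follows the API Gateway proxy integration; the greeting payload and parameter names are illustrative:

```python
import json

def lambda_handler(event, context):
    """Minimal API Gateway proxy-style handler: greet ?name=... or default."""
    # API Gateway sends None (not a missing key) when there are no query params
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is a plain function taking a dict, it can be unit-tested locally by passing a hand-built event, with no AWS account involved.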
Full Stack Developer Job Description
Position: Full Stack Developer
Department: Technology/Engineering
Location: Pune
Type: Full Time
Job Overview:
As a Full Stack Developer at Invvy Consultancy & IT Solutions, you will be responsible for both front-end and back-end development, playing a crucial role in designing and implementing user-centric web applications. You will collaborate with cross-functional teams including designers, product managers, and other developers to create seamless, intuitive, and high-performance digital solutions.
Responsibilities:
Front-End Development:
Develop visually appealing and user-friendly front-end interfaces using modern web technologies such as C#, HTML5, CSS3, and JavaScript frameworks (e.g., React, Angular, Vue.js).
Collaborate with UX/UI designers to ensure the best user experience and responsive design across various devices and platforms.
Implement interactive features, animations, and dynamic content to enhance user engagement.
Optimize application performance for speed and scalability.
Back-End Development:
Design, develop, and maintain the back-end architecture using server-side technologies (e.g., Node.js, Python, Ruby on Rails, Java, .NET).
Create and manage databases, including data modeling, querying, and optimization.
Implement APIs and web services to facilitate seamless communication between front-end and back-end systems.
Ensure security and data protection by implementing proper authentication, authorization, and encryption measures.
Collaborate with DevOps teams to deploy and manage applications in cloud environments (e.g., AWS, Azure, Google Cloud).
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
Proven experience as a Full Stack Developer or similar role.
Proficiency in front-end development technologies like HTML5, CSS3, JavaScript, and popular frameworks (React, Angular, Vue.js, etc.).
Strong experience with back-end programming languages and frameworks (Node.js, Python, Ruby on Rails, Java, .NET, etc.).
Familiarity with database systems (SQL and NoSQL) and their integration with web applications.
Knowledge of web security best practices and application performance optimization.
at DeepIntent
Who We Are:
DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data.
What You’ll Do:
We are looking for a Senior Software Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how health media is analyzed by our clients. This role requires an Engineer who not only understands DBA functions but also how they impact research objectives and can work with researchers and data scientists to achieve impactful results.
This role will be in the Analytics Organization and will require integration and partnership with the Engineering Organization. The ideal candidate is an inquisitive self-starter who is not afraid to take on and learn from challenges and will constantly seek to improve the facets of the business they manage. The ideal candidate will also need to demonstrate the ability to collaborate and partner with others.
- Serve as the Engineering interface between Analytics and Engineering teams
- Develop and standardize all interface points for analysts to retrieve and analyze data, with a focus on research methodologies and data-based decisioning
- Optimize queries and data access efficiencies, serve as expert in how to most efficiently attain desired data points
- Build “mastered” versions of the data for Analytics specific querying use cases
- Help with data ETL, table performance optimization
- Establish formal data practice for the Analytics practice in conjunction with rest of DeepIntent
- Build & operate scalable and robust data architectures
- Interpret analytics methodology requirements and apply to data architecture to create standardized queries and operations for use by analytics teams
- Implement DataOps practices
- Master existing and new Data Pipelines and develop appropriate queries to meet analytics specific objectives
- Collaborate with various business stakeholders, software engineers, machine learning engineers, analysts
- Operate between Engineers and Analysts to unify both practices for analytics insight creation
Who You Are:
- Adept in market research methodologies and using data to deliver representative insights
- Inquisitive, curious, understands how to query complicated data sets, move and combine data between databases
- Deep SQL experience is a must
- Exceptional communication skills, with the ability to collaborate and translate between technical and non-technical needs
- English language fluency and proven success working with teams in the U.S.
- Experience in designing, developing and operating configurable Data pipelines serving high volume and velocity data
- Experience working with public clouds like GCP/AWS
- Good understanding of software engineering, DataOps, and data architecture, Agile and DevOps methodologies
- Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown
- Proficient with SQL, Python or a JVM-based language, and Bash
- Experience with Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.
- Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, learn and be curious
- Comfortable to work in EST Time Zone
at DeepIntent
DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data.
What You’ll Do:
We are looking for a talented candidate with several years of experience in software Quality Assurance to join our QA team. This position will be at an individual contributor level as part of a collaborative, fast-paced team. As a member of the QA team, you will work closely with Product Managers and Developers to understand application features and create robust comprehensive test plans, write test cases, and work closely with the developers to make the applications more testable. We are looking for a well-rounded candidate with solid analytical skills, an enthusiasm for taking ownership of features, a strong commitment to quality, and the ability to work closely and communicate effectively with development and other teams. Experience with the following is preferred:
- Python
- Perl
- Shell Scripting
- Selenium
- Test Automation (QA)
- Software Testing (QA)
- Software Development (MUST HAVE)
- SDET (MUST HAVE)
- MySQL
- CI/CD
Who You Are:
- Hands on Experience with QA Automation Framework development & Design (Preferred language Python)
- Strong understanding of testing methodologies
- Scripting
- Strong problem analysis and troubleshooting skills
- Experience in databases, preferably MySQL
- Debugging skills
- REST/API testing experience is a plus
- Integrate end-to-end tests with CI/CD pipelines and monitor and improve metrics around test coverage
- Ability to work in a dynamic and agile development environment and be adaptable to changing requirements
- Performance testing experience with relevant automation and monitoring tools
- Exposure to Dockerization or Virtualization is a plus
- Experience working in the Linux/Unix environment
- Basic understanding of OS
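The REST/API testing mentioned above is often written as pytest-style contract tests: call an endpoint, then assert on the status and response schema. A self-contained sketch in which the HTTP client is stubbed (in a real suite you would use requests or a similar client against the live service; the endpoint shape and field names here are illustrative):

```python
def fake_get_user(user_id):
    """Stand-in for an HTTP GET against /users/<id>; returns a canned response."""
    return {"status": 200, "json": {"id": user_id, "name": "Asha", "active": True}}

def test_get_user_contract():
    """Contract test: status code plus required fields and their types."""
    resp = fake_get_user(42)
    assert resp["status"] == 200
    body = resp["json"]
    # Schema checks: all required fields present, with the expected types
    assert set(body) >= {"id", "name", "active"}
    assert isinstance(body["id"], int) and body["id"] == 42
    assert isinstance(body["name"], str)
    assert isinstance(body["active"], bool)

test_get_user_contract()
```

Checking the schema (fields and types) rather than exact payloads keeps such tests stable as data changes, and the same tests plug directly into a CI/CD pipeline.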
DeepIntent is committed to bringing together individuals from different backgrounds and perspectives. We strive to create an inclusive environment where everyone can thrive, feel a sense of belonging, and do great work together.
DeepIntent is an Equal Opportunity Employer, providing equal employment and advancement opportunities to all individuals. We recruit, hire and promote into all job levels the most qualified applicants without regard to race, color, creed, national origin, religion, sex (including pregnancy, childbirth and related medical conditions), parental status, age, disability, genetic information, citizenship status, veteran status, gender identity or expression, transgender status, sexual orientation, marital, family or partnership status, political affiliation or activities, military service, immigration status, or any other status protected under applicable federal, state and local laws. If you have a disability or special need that requires accommodation, please let us know in advance.
DeepIntent’s commitment to providing equal employment opportunities extends to all aspects of employment, including job assignment, compensation, discipline and access to benefits and training.
The role is with a Fintech Credit Card company based in Pune within the Decision Science team. (OneCard )
About
Credit cards haven't changed much for over half a century so our team of seasoned bankers, technologists, and designers set out to redefine the credit card for you - the consumer. The result is OneCard - a credit card reimagined for the mobile generation. OneCard is India's best metal credit card built with full-stack tech. It is backed by the principles of simplicity, transparency, and giving back control to the user.
The Engineering Challenge
“Re-imaging credit and payments from First Principles”
Payments is an interesting engineering challenge in itself with requirements of low latency, transactional guarantees, security, and high scalability. When we add credit and engagement into the mix, the challenge becomes even more interesting with underwriting and recommendation algorithms working on large data sets. We have eliminated the current call center, sales agent, and SMS-based processes with a mobile app that puts the customers in complete control. To stay agile, the entire stack is built on the cloud with modern technologies.
Purpose of Role :
- Develop and implement the collection analytics and strategy function for the credit cards. Use analysis and customer insights to develop optimum strategy.
CANDIDATE PROFILE :
- Successful candidates will have in-depth knowledge of statistical modelling/data analysis tools and techniques (Python, R, etc.). They will be adept communicators with good interpersonal skills, able to work with senior stakeholders in India to grow revenue, primarily through identifying, delivering, and creating new, profitable analytics solutions.
We are looking for someone who:
- Has a proven track record in collection and risk analytics, preferably in the Indian BFSI industry (a must)
- Identify & deliver appropriate analytics solutions
- Experienced in Analytics team management
Essential Duties and Responsibilities :
- Responsible for delivering high-quality analytical and value-added services
- Responsible for automating insights and proactive actions on them to mitigate collection risk.
- Work closely with the internal team members to deliver the solution
- Engage Business/Technical Consultants and delivery teams appropriately so that there is a shared understanding and agreement as to deliver proposed solution
- Use analysis and customer insights to develop value propositions for customers
- Maintain and enhance the suite of suitable analytics products.
- Actively seek to share knowledge within the team
- Share findings with peers from other teams and management where required
- Actively contribute to setting best practice processes.
Knowledge, Experience and Qualifications :
Knowledge :
- Good understanding of collection analytics preferably in Retail lending industry.
- Knowledge of statistical modelling/data analysis tools (Python, R etc.), techniques and market trends
- Knowledge of different modelling frameworks like Linear Regression, Logistic Regression, Multiple Regression, LOGIT, PROBIT, time-series modelling, CHAID, CART, etc.
- Knowledge of Machine learning & AI algorithms such as Gradient Boost, KNN, etc.
- Understanding of decisioning and portfolio management in banking and financial services would be added advantage
- Understanding of credit bureau would be an added advantage
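Of the modelling frameworks listed above, logistic regression is the workhorse of collections scorecards. As a toy, dependency-free sketch, it can be fitted with plain gradient descent on log-loss; the "days past due" feature and labels below are made up purely for illustration:

```python
import math

def sigmoid(z):
    """Logistic function, clamped to avoid overflow in math.exp."""
    z = max(-60.0, min(60.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=1000):
    """Fit weight w and bias b by stochastic gradient descent on log-loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            # Gradient of log-loss w.r.t. w and b is (p - y) * x and (p - y)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Toy data: days past due vs. whether the account rolled to the next bucket
xs = [1, 2, 3, 10, 12, 15]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
```

In practice one would use statsmodels or scikit-learn with many features and proper validation; the point of the sketch is that the fitted model outputs a roll probability per account, which is what drives collection prioritisation.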
Experience :
- 4 to 8 years of work experience in core analytics function of a large bank / consulting firm.
- Experience working on collection analytics is a must
- Experience handling large data volumes using data analysis tools and generating good data insights
- Demonstrated ability to communicate ideas and analysis results effectively both verbally and in writing to technical and non-technical audiences
- Excellent communication, presentation and writing skills Strong interpersonal skills
- Motivated to meet and exceed stretch targets
- Ability to make the right judgments in the face of complexity and uncertainty
- Excellent relationship and networking skills across our different business and geographies
Qualifications :
- Master's degree in Statistics, Mathematics, Economics, Business Management, or Engineering from a reputed college
About UpSolve
We build and deliver complex AI solutions which help drive business decisions faster and more accurately. We are a typical AI company and have a range of solutions developed on video, image, and text.
What you will do
- Stay informed on new technologies and implement cautiously
- Maintain necessary documentation for the project
- Fix the issues reported by application users
- Plan, build, and design solutions with a mental note of future requirements
- Coordinate with the development team to manage fixes, code changes, and merging
Location: Mumbai
Working Mode: Remote
What are we looking for
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Minimum 2 years of professional experience in software development, with a focus on machine learning and full stack development.
- Strong proficiency in Python programming language and its machine learning libraries such as TensorFlow, PyTorch, or scikit-learn.
- Experience in developing and deploying machine learning models in production environments.
- Proficiency in web development technologies including HTML, CSS, JavaScript, and front-end frameworks such as React, Angular, or Vue.js.
- Experience in designing and developing RESTful APIs and backend services using frameworks like Flask or Django.
- Knowledge of databases and SQL for data storage and retrieval.
- Familiarity with version control systems such as Git.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work effectively in a fast-paced and dynamic team environment.
- Good to have Cloud Exposure