
Purple Hirez
https://purplehirez.com

The recruiter has not been active on this job recently. You may apply but please expect a delayed response.
We are looking for an experienced engineer with superb technical skills. You will primarily be responsible for architecting and building large-scale systems that deliver AI and analytical solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, scalable software solutions. The successful candidate will be curious, creative, ambitious, self-motivated, flexible, and have a bias toward action. As part of the early engineering team, you will have a chance to make a measurable impact and to take on a significant amount of responsibility.
Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, machine learning is a plus.
Who you are:
- Bachelor's/Master's/PhD in CS or equivalent industry experience
- Expert-level knowledge of Python
- Expert-level knowledge of at least one web framework such as Django, Flask, or FastAPI (FastAPI preferred)
- Ability to understand and implement microservices, RESTful APIs, and distributed systems using cloud-native principles
- Knowledge of and experience integrating with and contributing to open-source projects and frameworks
- Repeated experience building secure, reliable, scalable platforms
- Experience with data control patterns and ORM libraries
- Experience implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools such as Git
- Familiarity with continuous integration tools such as Jenkins
What you will do:
- Co-lead the design and architecture of the core platform components that power our AI/ML platform
- Integrate data storage solutions, including RDBMS and key-value stores
- Write reusable, testable, and efficient code
- Design and implement low-latency, high-availability, scalable, asynchronous applications
- Integrate user-facing elements developed by front-end developers with server-side logic
- Design internal and customer-facing APIs
- Support enterprise customers during implementation and launch of the service
Experience:
- Python: 5 years (Required)
- FastAPI: 2 years (Preferred)
- REST: 5 years (Required)
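For illustration, the data-access and ORM bullets above can be sketched with the Python standard library alone. A minimal example of the repository pattern over sqlite3, standing in for a full ORM; the class, table, and field names are invented for the example, not taken from the posting:

```python
import sqlite3
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: int
    name: str

class UserRepository:
    """Minimal repository pattern over sqlite3: callers work with User
    objects and never see SQL directly."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, name: str) -> User:
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return User(id=cur.lastrowid, name=name)

    def get(self, user_id: int) -> Optional[User]:
        row = self.conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return User(*row) if row else None

repo = UserRepository(sqlite3.connect(":memory:"))
alice = repo.add("Alice")
```

A full ORM such as SQLAlchemy adds migrations, relationships, and query building on top of this idea, but the boundary is the same: one class owns all persistence for one entity.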
Be a part of the growth story of a rapidly growing organization in AI. We are seeking a passionate Machine Learning (ML) engineer with a strong background in developing and deploying state-of-the-art models in the cloud. You will participate in the complete cycle of building machine learning models, from conceptualization of ideas through data preparation, feature selection, training, and evaluation to productionization.
On a typical day, you might build data pipelines, develop a new machine learning algorithm, train a new model, or deploy the trained model to the cloud. You will have a high degree of autonomy, ownership, and influence over your work, the machine learning organization's evolution, and the direction of the company.
Required Qualifications
- Bachelor's degree in computer science/electrical engineering or equivalent practical experience
- 7+ years of industry experience in data science and ML/AI projects, including experience productionizing machine learning in an industry setting
- Strong grasp of statistical machine learning, linear algebra, deep learning, and computer vision
- 3+ years of experience with one or more general-purpose programming languages, including but not limited to R and Python
- Experience with PyTorch, TensorFlow, or other ML frameworks
- Experience using cloud services such as AWS, GCP, or Azure, and an understanding of cloud-native application development principles
In this role you will:
- Design and implement ML components, systems and tools to automate and enable our various AI industry solutions
- Apply research methodologies to identify the machine learning models that solve a business problem, and deploy the models at scale
- Own the ML pipeline from data collection, through prototype development, to production
- Develop high-performance, scalable, and maintainable inference services that communicate with the rest of our tech stack
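The train/evaluate cycle described above can be miniaturized without any ML framework. A dependency-free sketch of fitting a one-variable linear model by gradient descent; the synthetic data and hyperparameters are invented for the example:

```python
import random

random.seed(0)

# Synthetic data around y = 2x + 1; in a real project this stage is data preparation.
data = [(x, 2.0 * x + 1.0 + random.uniform(-0.1, 0.1)) for x in range(20)]
train, test = data[:15], data[15:]          # simple train/evaluation split

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):                       # training: plain gradient descent on MSE
    gw = sum(2 * (w * x + b - y) * x for x, y in train) / len(train)
    gb = sum(2 * (w * x + b - y) for x, y in train) / len(train)
    w, b = w - lr * gw, b - lr * gb

# Evaluation: mean squared error on the held-out points.
mse = sum((w * x + b - y) ** 2 for x, y in test) / len(test)
```

PyTorch or TensorFlow replace the hand-written gradients with autograd and the loop with an optimizer, but the conceptual pipeline, split, train, evaluate, is unchanged.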
- 5+ years of industry experience administering (including setting up, managing, and monitoring) data processing pipelines, both streaming and batch, using frameworks such as Kafka Streams and PySpark, and streaming databases like Druid or equivalents such as Hive
- Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
- Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and Managed Kafka
- 5+ years of industry experience in Python
- Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
- Experience with scripting languages (Python highly desirable) and with API development using Swagger
- Experience implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools such as Git
- Familiarity with continuous integration tools such as Jenkins
Responsibilities
- Architect, design, and implement large-scale data processing pipelines using Kafka Streams, PySpark, Fluentd, and Druid
- Create custom operators for Kubernetes and Kubeflow
- Develop data ingestion processes and ETLs
- Assist in DevOps operations
- Design and implement APIs
- Identify performance bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and documentation
- Communicate with stakeholders regarding various aspects of the solution
- Mentor team members on best practices
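The streaming-pipeline responsibilities above follow a source → transform → sink shape regardless of engine. A pure-Python sketch of a tumbling-window word count, the "hello world" of stream processing; the event stream is hard-coded for illustration, where a real pipeline would consume from Kafka:

```python
from collections import Counter
from typing import Iterable, Iterator

def source(events: Iterable[str]) -> Iterator[str]:
    """Stand-in for a Kafka consumer: yields raw messages."""
    yield from events

def transform(messages: Iterator[str]) -> Iterator[str]:
    """Tokenize each message (the 'map' stage)."""
    for msg in messages:
        yield from msg.lower().split()

def sink(tokens: Iterator[str], window: int) -> Iterator[Counter]:
    """Emit aggregated counts every `window` tokens (a tumbling window)."""
    batch = Counter()
    for i, tok in enumerate(tokens, 1):
        batch[tok] += 1
        if i % window == 0:
            yield batch
            batch = Counter()
    if batch:                         # flush the final partial window
        yield batch

events = ["error in pipeline", "pipeline restarted", "error cleared"]
windows = list(sink(transform(source(events)), window=4))
```

Kafka Streams and PySpark structured streaming provide the same stages with fault tolerance, partitioning, and event-time windows; the generator chain keeps the sketch lazy, so nothing is buffered beyond one window.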
- A clear understanding of the fundamentals and concepts of Magento 1/2 and PHP
- Strong experience in Magento extension development
- Ability to write well-engineered source code that complies with accepted web standards
- Strong experience with Magento best practices, including developing custom extensions and extending third-party extensions
- Thorough functional and code-level knowledge of all Magento products and all relevant commerce technologies
- Good experience with caching and performance improvement

- 3+ years of industry experience administering (including setting up, managing, and monitoring) data processing pipelines, both streaming and batch, using frameworks such as Kafka, the ELK Stack, and Fluentd, and streaming databases like Druid
- Strong industry expertise with containerization technologies, including Kubernetes and docker-compose
- 2+ years of industry experience developing scalable data ingestion processes and ETLs
- Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and Managed Kafka
- Experience with scripting languages; Python experience highly desirable
- 2+ years of industry experience in Python
- Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
- Demonstrated expertise building cloud-native applications
- Experience in API development using Swagger
- Experience implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools such as Git
- Familiarity with continuous integration tools such as Jenkins
- Design and implement large-scale data processing pipelines using Kafka, Fluentd, and Druid
- Develop data ingestion processes and ETLs
- Assist in DevOps operations
- Design and implement APIs
- Identify performance bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and documentation
- Communicate with stakeholders regarding various aspects of the solution
- Mentor team members on best practices
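The ingestion and ETL work listed above can be sketched end to end with the standard library. A minimal extract-transform-load pass; the raw feed, schema, and filter rule are invented for the example, where the posting's stack would receive events via Kafka or Fluentd and load into Druid:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed, illustrative only.
RAW = "ts,level,msg\n1,ERROR,disk full\n2,INFO,ok\n3,ERROR,timeout\n"

def extract(text: str):
    """Parse the raw feed into dict rows."""
    return csv.DictReader(io.StringIO(text))

def transform(rows):
    """Keep only error events, normalizing types and casing."""
    for row in rows:
        if row["level"] == "ERROR":
            yield int(row["ts"]), row["msg"].upper()

def load(conn: sqlite3.Connection, rows) -> int:
    """Bulk-insert transformed rows; return the stored row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS errors (ts INTEGER, msg TEXT)")
    conn.executemany("INSERT INTO errors VALUES (?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM errors").fetchone()[0]

conn = sqlite3.connect(":memory:")
n = load(conn, transform(extract(RAW)))
```

Keeping extract, transform, and load as separate functions makes each stage unit-testable in isolation, which is the same decomposition production ETL frameworks encourage.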
Job Description
We are looking for an experienced engineer with superb technical skills. You will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytical solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, scalable software solutions.
Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, machine learning is a plus.
Skills
- Bachelor's/Master's/PhD in CS or equivalent industry experience
- Demonstrated expertise building and shipping cloud-native applications
- 5+ years of industry experience administering (including setting up, managing, and monitoring) data processing pipelines, both streaming and batch, using frameworks such as Kafka Streams and PySpark, and streaming databases like Druid or equivalents such as Hive
- Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
- Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and Managed Kafka
- 5+ years of industry experience in Python
- Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
- Experience with scripting languages (Python highly desirable) and with API development using Swagger
- Experience implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools such as Git
- Familiarity with continuous integration tools such as Jenkins
Responsibilities
- Architect, design, and implement large-scale data processing pipelines using Kafka Streams, PySpark, Fluentd, and Druid
- Create custom operators for Kubernetes and Kubeflow
- Develop data ingestion processes and ETLs
- Assist in DevOps operations
- Design and implement APIs
- Identify performance bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and documentation
- Communicate with stakeholders regarding various aspects of the solution
- Mentor team members on best practices
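On the bottleneck-identification responsibility above, Python ships a profiler in the standard library. A small sketch that profiles a deliberately quadratic function and captures the report; the function names and workload are invented for the example:

```python
import cProfile
import io
import pstats

def slow_concat(n: int) -> str:
    """Deliberately quadratic: each += copies the whole string so far."""
    s = ""
    for i in range(n):
        s += str(i)
    return s

def fast_concat(n: int) -> str:
    """Linear alternative: build once with join."""
    return "".join(str(i) for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
slow_concat(5000)
profiler.disable()

# Render the profile into a string, sorted by cumulative time,
# so the hottest call paths appear first.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats()
report = buf.getvalue()
```

Reading the cumulative column top-down is usually enough to find the function worth rewriting; the fix here is the `join` variant, which produces identical output in linear time.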
Technical skills:
- Working experience using either of the MVVM or MVP design patterns
- Experience using web services and the SQLite database
- Experience with multithreading concepts, including RxJava
- Experience using version control systems such as SVN, GitHub, and Bitbucket
- Experience using dependency injection principles in Android
- Experience using design patterns
- Experience integrating with third-party APIs and libraries such as Facebook, Gmail, Retrofit, and Picasso
- Experience with or knowledge of Google's Flutter
- Deep knowledge of and working experience in any of the Adobe Marketo, Hybris Marketing, or Emarsys platforms, with end-to-end hands-on implementation and integration experience, including integration tools
- Other platforms of interest include Adobe Campaign, Emarsys, Pardot, Salesforce Marketing Cloud, and Eloqua
- Hybris Marketing or Adobe Marketo certification strongly preferred
- Experience evaluating different marketing platforms (e.g., Salesforce SFMC versus Adobe Marketo versus Adobe Marketing)
- Experience in the design and setup of marketing technology architecture
- Proficient in scripting, with experience in HTML, XML, CSS, JavaScript, etc.
- Experience with triggering campaigns via API calls
- Responsible for the detailed design of technical solutions, POVs on marketing solutions, proof-of-concepts (POCs), prototyping, and documentation of the technical design
- Collaborate with the onshore team in tailoring solutions that meet business needs using an agile/iterative development process
- Perform feasibility analysis for marketing solutions that meet the business goals
- Experience with client discovery workshops and technical solution presentations
- Excellent communication skills required, as this is a client business and IT interfacing role
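Triggering campaigns via API calls, as mentioned above, generally means an authenticated HTTP POST with a JSON payload. A stdlib sketch that only builds the request object; the endpoint path, field names, and URL are entirely hypothetical and do not correspond to any real marketing-platform API:

```python
import json
import urllib.request

def build_campaign_trigger(base_url: str, campaign_id: str, email: str) -> urllib.request.Request:
    """Construct (but do not send) a POST request that would trigger a
    campaign for one recipient. All names here are illustrative."""
    payload = json.dumps({"campaignId": campaign_id, "recipient": email}).encode()
    return urllib.request.Request(
        f"{base_url}/campaigns/{campaign_id}/trigger",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_campaign_trigger("https://api.example.com", "welcome-01", "a@b.com")
```

Separating request construction from sending keeps the payload logic testable without network access; the actual platform's endpoint, auth scheme, and field names come from its own API reference.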

- Minimum 5 years of hands-on experience with Java Spring Boot technologies
- Experience with monolithic applications
- Experience using Redis and RabbitMQ
- Experience with RDBMSs such as SQL Server and MySQL
- Strong analytical, problem-solving, and data-analysis skills
- Excellent communication, presentation, and interpersonal skills are a must
- Microservice frameworks such as Java Spring Boot
- Design and implement automated unit and integration tests
As a Senior Technical Lead, you will be a member of a software development team building innovative new features for the application, leading the team as a senior full-stack developer.
Primary Job Responsibilities:
- Inherit a well-architected, clean, and robust codebase built with .NET Core 5.x, JavaScript, and Angular
- Evaluate and implement new libraries and components
- Ensure best practices are followed in the design and development of the application; contribute to and help manage our open-source libraries
- Strong knowledge of C#, .NET Core, JavaScript, and Angular
- NoSQL databases (MongoDB)
- Real-time web applications (WebSockets / SignalR)
- Containerization technologies (Docker, Kubernetes)
- Swagger / OpenAPI Initiative
- Strong knowledge of design patterns and practices
- Experience with non-Microsoft tech stacks such as Node, Python, and Angular
- Source control - GitHub
- Unit / performance / load testing
- Experience with continuous integration - Jenkins/Travis/Circle/etc.
- Experience working in agile environments - JIRA/Slack
- Working knowledge of SaaS architecture and deployments - AWS/Google Cloud/etc.
Similar companies
About the company
Beyond Seek is a team of R.A.R.E individuals who are solving impactful problems using the best tools available today!
About the company
Twinline is driving a financial revolution for NBFCs and MFIs through its flagship platform, Finpage, a secure, scalable SaaS solution built to simplify and transform microlending. From loan origination to regulatory reporting, Finpage streamlines operations, reduces turnaround time from 14 days to just 1, and cuts errors to 0.5%.
Trusted by 40+ clients and managing ₹25,000 Cr+ in AUM, Twinline is reshaping inclusive finance—digitally, intelligently, and at scale.
About the company
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains.
With offices in the US, India, the UK, Australia, Mexico, and Canada, we offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, and Quality Assurance & Test Automation.
Leveraging our multi-site operations in the USA and India and availability of world-class infrastructure, we offer a combination of on-site, off-site and offshore service models. Our technical competencies, proactive management approach, proven methodologies, committed support and the ability to quickly react to urgent needs make us a valued partner for any kind of Digital Enablement Services, Managed Services, or Business Services.
We believe that the technology and thought leadership that we command in the industry is the direct result of the kind of people we have been able to attract, to form this organization (you are one of them!).
Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like MIT, Wharton, IITs, IIMs, and BITS and with rich work experience in some of the biggest companies in the world.
Wissen Technology has been certified as a Great Place to Work®. Wissen is committed to providing its people the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
About the company
Founded in 2014 by Gajan Mohanarajah and Arudchelvan Krishnamoorthy, Rapyuta Robotics is a leading innovator in cloud robotics for warehouse automation. Headquartered in Tokyo with global offices including Bengaluru, we empower businesses to build, deploy, monitor, and scale autonomous mobile robot systems over the cloud.
Our platform, Rapyuta.io, manages everything from localization and motion planning to multi-robot coordination and fleet management. We also build hardware solutions like Pick-Assist AMRs (PA-AMR), Autonomous Forklifts (AFL), and Automated Storage & Retrieval Systems (ASRS).
💰 Funding
- Raised ~$81 million across multiple rounds, including a Series C of JPY 6.4 billion in 2022
- Backed by leading global investors such as Goldman Sachs, SBI Investment, and Cyberdyne
✨What Sets Us Apart
- Cloud-First Robotics: Offloading heavy computation to the cloud allows simple, cost-effective robots to work smarter in fleets
- Scalable Architecture: Real-time dashboards, task allocation, path planning, and fleet health monitoring at scale
- Proven Deployments: Robotics solutions already in use with major logistics players in Japan and expanding globally
- Innovation Focus: Strong R&D and patents in multi-robot coordination, plus contributions to open-source robotics frameworks
🚀 Culture & Why Join
- Global, diverse teams: Colleagues from over 20 countries, bringing together expertise across robotics, AI, and cloud
- Impact from day one: Engineers own full features and see their work live in real robots used at scale
- Values-driven: A culture of openness, learning, and pushing boundaries between hardware, software, and cloud technologies
👉 Join us to shape the future of robotics and redefine how warehouses operate worldwide.
About the company
Joining the team behind the world’s most trusted artifact firewall isn’t just a job - it’s a mission.
🧩 What the Company Does
This company provides software tools to help development teams manage open-source code securely and efficiently. Its platform covers artifact management, automated policy enforcement, vulnerability detection, software bill of materials (SBOM) management, and AI-powered risk analysis. It's used globally by thousands of enterprises and millions of developers to secure their software supply chains.
👥 Founding Team
The company was founded in the late 2000s by veteran engineers with deep roots in the open-source community, including one who helped create a widely adopted Java-based build automation tool used by millions today.
💰 Funding & Financials
Over the years, the company has raised nearly $150 million across several funding rounds, including a large growth round led by a top-tier private equity firm. It crossed $100 million in annual recurring revenue around 2021 and has remained profitable since. Backers include well-known names in venture capital and private equity.
🏆 Key Milestones & Achievements
- Early on, the company took over stewardship of a widely used public code repository.
- It launched tools for artifact repository management and later expanded into automated security and compliance.
- Has blocked hundreds of thousands of malicious open-source packages and helped companies catch risky components before deployment.
- Released AI-powered tools that go beyond CVE databases to detect deeper threats.
- Recognized as a market leader in software composition analysis by major industry analysts.
- Today, it’s used by many Fortune 100 companies across industries like finance, government, and healthcare.




