

Codemind Staffing Solutions
https://codemind.in

Key Responsibilities
Architect and implement enterprise-grade Lakehouse solutions using Databricks
Design and deliver scalable batch and real-time data pipelines using Apache Spark (PySpark/SQL)
Build ETL/ELT pipelines, incremental data loads, and metadata-driven ingestion frameworks
Implement and optimize Databricks components: Delta Lake, Delta Live Tables, Autoloader, Structured Streaming, and Workflows
Design large-scale data warehousing solutions with 3NF and dimensional modeling
Establish data governance, security, and data quality frameworks, including Unity Catalog
Lead ML lifecycle management using MLflow and drive AI use cases (RAG, AI/BI)
Manage cloud-native deployments on Microsoft Azure and integrate with enterprise systems (e.g., ServiceNow)
Drive CI/CD, DevOps practices, and performance optimization of Spark workloads
Provide technical leadership, mentor teams, and ensure successful delivery
Collaborate with stakeholders to translate business requirements into scalable solutions
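In Databricks, the incremental-load responsibility above is typically handled by Auto Loader and Delta checkpoints; as a minimal, language-agnostic sketch of the underlying high-watermark idea (class, field, and table names here are hypothetical, for illustration only):

```python
from dataclasses import dataclass, field

@dataclass
class IncrementalLoader:
    """Tracks a high-watermark per source table and returns only unseen rows."""
    watermarks: dict = field(default_factory=dict)  # table name -> last loaded id

    def load(self, table: str, rows: list) -> list:
        last = self.watermarks.get(table, 0)
        # Only rows above the stored watermark are "new" for this batch.
        new_rows = [r for r in rows if r["id"] > last]
        if new_rows:
            self.watermarks[table] = max(r["id"] for r in new_rows)
        return new_rows

loader = IncrementalLoader()
batch1 = loader.load("orders", [{"id": 1}, {"id": 2}])            # full first load
batch2 = loader.load("orders", [{"id": 1}, {"id": 2}, {"id": 3}])  # only id=3 is new
```

A production framework would persist the watermark (e.g., in a Delta checkpoint) rather than keep it in memory, which is what makes the ingestion metadata-driven and restartable.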
Required Skills & Experience
10+ years in Data Engineering / Analytics / AI with strong delivery ownership
Deep expertise in Databricks ecosystem (Notebooks, Delta Lake, Workflows, AI/BI, Apps, Genie)
Strong hands-on experience with:
Apache Spark (performance tuning & scalability)
Python and SQL
Proven experience in:
Solution architecture and large-scale data platforms
Data warehousing and advanced data modeling
Batch and real-time processing systems
Experience with:
Azure Databricks and Azure data services
MLflow and MLOps practices
ServiceNow or enterprise integrations
Exposure to AI technologies (RAG, LLM-based solutions)
Strong stakeholder management and leadership skills
Certifications (Preferred)
Databricks certifications aligned to data engineering and AI tracks, such as:
Databricks Certified Data Engineer Associate (validates foundational ETL, Spark, and Lakehouse capabilities)
Databricks Certified Data Engineer Professional (advanced expertise in pipeline design, optimization, and governance)
Certifications in Databricks Machine Learning or Generative AI tracks (e.g., ML Associate / Professional) for AI-driven use cases
Relevant cloud certifications in Microsoft Azure or Amazon Web Services for platform deployment and architecture
We are looking for a highly skilled AI Engineer / AI Architect to design, develop, and deploy scalable AI solutions. The ideal candidate will have strong expertise in building enterprise Generative AI solutions, along with the ability to architect end-to-end AI systems aligned with business objectives.
Key Responsibilities
Design and implement end-to-end AI/Gen AI solutions.
Architect scalable and secure AI systems using cloud platforms.
Work on Generative AI use cases including LLMs, RAG, workflows, Agentic AI, prompt engineering, LLM-as-Judge, and fine-tuning.
Collaborate with cross-functional teams and customers to translate business requirements into AI solutions.
Working knowledge of the SDLC process and an understanding of DevOps practices and workflows.
Stay updated with emerging AI trends and evaluate new tools/technologies.
Mentor junior engineers and guide best practices in AI development.
Required Skills
Strong proficiency in Python and AI/ML tools such as Google ADK, LangGraph, and LangChain. Experience configuring LLMs on AWS Bedrock, Azure AI Foundry, or Databricks, and setting up LLMOps for production-grade AI systems.
Hands-on experience with Generative AI (LLMs, embedding, prompt engineering).
Understanding of the Agile process and ability to contribute to technical scoping and estimation.
Strong problem-solving and analytical skills.
Experience working on production-grade AI systems.
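For the RAG use cases listed above, the core retrieval step can be sketched in plain Python. This toy bag-of-words similarity stands in for a real embedding model (which a production system would call via Bedrock, Foundry, or Databricks); all names and documents are illustrative:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts. A real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = ["invoice payment terms are net 30",
        "the office cafeteria menu changes weekly"]
context = retrieve("what are the payment terms?", docs)
# In a RAG pipeline, `context` would be prepended to the LLM prompt.
```

The design point is the separation of retrieval from generation: swapping the toy `embed` for a real embedding model changes nothing else in the pipeline.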
Nice to Have
Certifications in Gen AI/AI/ML in Azure/Databricks platforms.
Foundational knowledge of Azure Cloud.
Experience delivering custom enterprise AI solutions is an added advantage.

Job Title: RDK-B/ RDK-V Developer
Experience: 3-8 Years
Work Location: Chennai / Bangalore
Work Type: Work from Office
Job Summary
We are looking for experienced RDK-B/ RDK-V Developers to design, develop, and maintain embedded software for Set-Top Box (STB) platforms using C and Linux. The role involves working with
multimedia streaming, device drivers, system integration, and ensuring compliance with video standards. The ideal candidate should have strong hands-on experience in RDK platforms, Linux
system programming, and debugging tools.
Key Responsibilities
Design and develop embedded software for STB platforms using C and Linux.
Work with RDK-B / RDK-V (Reference Design Kit for Video) and GStreamer for multimedia streaming.
Develop and integrate device drivers using Yocto build system.
Collaborate with SoC vendors to resolve hardware/software integration issues.
Implement and optimize video streaming protocols and subsystems (e.g., V4L2, HDMI, Bluetooth).
Perform debugging using GDB and other Linux tools.
Ensure compliance with video standards such as MPEG, ATSC, and DVB.
Participate in Agile development processes and contribute to sprint planning and reviews.
Document software architecture, design decisions, and test results.
Required Skills & Qualifications
3 to 8 years of hands-on RDK-B/ RDK-V development experience.
Strong in C/C++, Linux system programming, IPC, memory and process management.
Good understanding of TR-181, WebPA, TR-069 basics, and networking fundamentals (DHCP, DNS, NAT, Firewall, IPv4/IPv6).
Experience with Git, Gerrit, Jenkins, debugging tools (GDB, Valgrind, strace).
Good understanding of networking layers (L1-L3).
Preferred Skills
Experience in multimedia frameworks and streaming protocols.
Knowledge of Yocto, build systems, and cross-compilation.
Familiarity with Agile tools and practices.
Education
• Bachelor's degree in Computer Science, Electronics, or related field.
Additional Requirements
Strong analytical, problem-solving, and communication skills.

Experience Range
Experience: 8 to 12 years in Wi‑Fi development, embedded Linux, and networking domains.
• Technical lead/Specialist - 8–12 years
Role Summary
We are seeking an experienced Wi‑Fi Developer with strong proficiency in C/C++ on Linux and deep expertise in modern Wi‑Fi standards and networking protocols. The role involves working across the Wi‑Fi stack—drivers, kernel subsystems, user‑space components, and embedded platforms such as OpenWrt and RDK‑B—to build and optimize high‑performance wireless solutions.
Key Responsibilities
• Develop and optimize Wi‑Fi features across 802.11a/b/g/n/ac/ax/be/bn, including Wi‑Fi 6/6E/7/8, MLO, OFDMA, WPA2/WPA3, Beamforming, Multi‑AP coordination, NPCA and MU‑MIMO.
• Work with core Wi‑Fi components such as hostapd, wpa_supplicant, nl80211, cfg80211, mac80211, netlink & wireless extensions
• Middleware development for EasyMesh R1 to R6/ 1905.1 / 1905.1a
• Debug and resolve Wi‑Fi performance and connectivity issues using tcpdump, Wireshark, Omnipeek, iw, logs, and chipset tools.
• Utilize tools like Spirent, Ixia, Chariot, Veriwave for traffic generation and analysis.
• Build and integrate Wi‑Fi features on OpenWrt or RDK‑B, including packaging, configuration, and system integration.
• Support networking stack integration including TCP/IP, DHCP/DNS, NAT, QoS, VLAN, PPPoE, and L2/L3 protocols.
• Develop automation scripts/tools using bash and Python; contribute to CI and test automation workflows.
• Work closely with platform, QA, and hardware teams to deliver robust, scalable Wi‑Fi functionality.
Required Skills & Qualifications
• Strong proficiency in C/C++ development on Linux / Embedded Linux.
• Deep understanding of Wi‑Fi standards: 802.11a/b/g/n/ac/ax/be/bn, Wi‑Fi 6/6E, Wi‑Fi 7, Wi‑Fi 8, MLO, OFDMA, MU‑MIMO, beamforming, WPA2/WPA3.
• Strong middleware development experience for EasyMesh R1 to R6/ 1905.1 / 1905.1a
• Hands‑on experience with hostapd, wpa_supplicant, cfg80211 / nl80211 / mac80211.
• Strong debugging skills using tcpdump, Wireshark, iperf3, iw, and system logs.
• Experience with RDK-B, Prpl, or OpenWrt including build systems, packaging, and configuration frameworks.
• Knowledge of networking: TCP/IP, routing, DHCP/DNS, NAT, firewall, QoS, VLAN, PPPoE.
• Understanding of L2/L3 protocols: VLANs, EtherChannel (LAG/LACP), STP/RSTP/MSTP.
• Proficiency with Git, bash, and Python scripting.
• Strong analytical and problem‑solving skills.
Preferred / Nice‑to‑Have
• Experience with WiFi 8 Feature development and Integration
• Experience with chipset SDKs (Qualcomm, Broadcom, MediaTek, Intel, Realtek).
• Knowledge of TR‑069, TR‑181, TR‑369 (USP).
• Familiarity with CI/CD pipelines and automated test frameworks.
• Experience with Wi‑Fi performance tuning and field deployment optimization.

Key responsibilities
• Design, build, and maintain robust CI/CD pipelines using Azure DevOps Services (Azure Pipelines) and Git-based workflows.
• Implement and manage infrastructure as code (IaC) using ARM templates, Bicep, and/or Terraform for repeatable environment provisioning.
• Containerize applications (Docker) and manage container orchestration platforms such as AKS (Azure Kubernetes Service).
• Automate build, test, release, and rollback processes; integrate automated testing and quality gates into pipelines.
• Monitor and improve platform reliability and observability using logging and monitoring tools (e.g., Azure Monitor, Application Insights, Prometheus, Grafana).
• Drive platform security and compliance through pipeline controls, secrets management (Key Vault / Vault), and secure configuration practices.
• Implement cost-optimization and governance for Azure resources (tags, policies, budgets).
• Troubleshoot build/release failures, production incidents, and performance bottlenecks; perform root-cause analysis and implement permanent fixes.
• Mentor developers in Git workflows, pipeline authoring, best practices for IaC, and cloud-native design.
• Maintain clear documentation: runbooks, deployment playbooks, architecture diagrams, and pipeline templates.
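The automated release-and-rollback responsibility above can be sketched as a small state machine. In Azure Pipelines this logic lives in YAML stages, deployment jobs, and approval gates; the Python sketch below (all names hypothetical) only illustrates the control flow of a gated deploy with automatic rollback:

```python
class ReleaseManager:
    """Minimal sketch: deploy a new version, keep the previous one for rollback."""

    def __init__(self, initial: str):
        self.current = initial
        self.previous = None

    def deploy(self, version: str, health_check) -> bool:
        # Record the running version so we can roll back, then cut over.
        self.previous, self.current = self.current, version
        if not health_check(version):  # quality gate failed
            self.rollback()
            return False
        return True

    def rollback(self):
        if self.previous is not None:
            self.current, self.previous = self.previous, None

rm = ReleaseManager("v1.0")
ok = rm.deploy("v1.1", lambda v: False)  # failing health check triggers rollback
```

The key practice it mirrors is that rollback is a first-class, automated path in the pipeline, not a manual recovery step.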
Required skills & experience
• 4+ years hands-on experience working with Azure and cloud-native application delivery.
• Deep experience with Azure DevOps (Repos, Pipelines, Artifacts, Boards).
• Strong IaC skills with Terraform, ARM templates, or Bicep.
• Solid experience with CI/CD design and YAML pipeline authoring.
• Practical knowledge of containerization (Docker) and Kubernetes — preferably AKS.
• Scripting skills: PowerShell, Bash, and/or Python for automation.
• Experience with Git workflows (branching strategies, PRs, code reviews).
• Familiarity with configuration management and secrets management (Azure Key Vault, HashiCorp Vault).
• Understanding of networking, identity (Azure AD), and security fundamentals in Azure.
• Strong troubleshooting, debugging, and incident response skills.
• Good collaboration and communication skills; ability to work across teams.
Certification
AZ-400 (Microsoft Certified: DevOps Engineer Expert), AZ-104, AZ-305, or HashiCorp Terraform Associate.
Position: .NET Developer
Job Profile: We are seeking a highly motivated and experienced .NET Core Developer to join our team. The ideal candidate will have a strong background in designing, developing, and maintaining web applications and APIs using .NET Core. As part of our dynamic team, you will work on various projects that involve creating scalable and high-performing applications for our clients.
Responsibilities:
Design, develop, and maintain RESTful APIs using .NET Core and related technologies.
Develop, test, and deploy web applications that meet functional and non-functional
business requirements.
Collaborate with front-end developers to integrate user-facing elements with server-side logic.
Write clean, scalable, and efficient code following best practices in software
development.
Participate in code reviews, design discussions, and contribute to continuous
improvement processes.
Troubleshoot and debug applications and APIs to optimize performance.
Work with databases such as SQL Server or NoSQL databases.
Implement security and data protection measures.
Collaborate with cross-functional teams, including QA, DevOps, and Project Managers, to ensure successful project delivery.
Maintain proper documentation of code, processes, and functionality.
Skills:
3+ years of hands-on experience with .NET Core in API and web application
development.
Proficiency in C# and ASP.NET Core for building server-side applications.
Strong understanding of RESTful API design, development, and best practices.
Experience with front-end technologies such as HTML5, CSS3, and JavaScript
frameworks (e.g. React, Angular or Vue.js).
Solid understanding of Entity Framework Core or other ORM tools.
Experience with SQL Server or other relational databases.
Familiarity with cloud platforms like Azure or AWS is a plus.
Knowledge of microservices architecture and containerization technologies (e.g., Docker, Kubernetes) is preferred.
Familiarity with version control systems like Git.
Experience with CI/CD pipelines and automated testing frameworks is a plus.
Strong problem-solving and analytical skills.
Excellent communication and teamwork abilities.
Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent work experience).