50+ Python Jobs in Hyderabad | Python Job openings in Hyderabad
Apply to 50+ Python Jobs in Hyderabad on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.


Python Full Stack Trainer - Join Our Friendly Team!
We're looking for a Python Full Stack Trainer to join our fun and supportive educational family. You'll be teaching students how to become awesome Python full stack developers using our already designed curriculum. Even if you're not a perfect match for all requirements but love teaching and know your Python - we'd still love to hear from you!
What You'll Do
- Teach our well-designed Python full stack curriculum in an engaging way
- Make learning fun with hands-on coding sessions and real-world examples
- Support students at different skill levels with patience and enthusiasm
- Provide helpful feedback to help students grow their skills
- Share your passion for Python and web development
- Be part of our friendly, collaborative teaching team
Skills That Would Be Great
- Knowledge of Python and web frameworks like Django or Flask
- Experience with front-end basics (HTML, CSS, JavaScript)
- Familiarity with databases and how they connect to applications
- Some experience with Git and cloud services
- A knack for explaining tech concepts in simple, understandable ways
- Enthusiasm for teaching and helping others learn
Nice to Have (But Not Required!)
- Previous coding or development experience
- Some teaching or mentoring background
- A technical degree or relevant certifications
The Awesome Perks
- State-of-the-art high-tech training facility with the latest equipment
- Ready-to-use curriculum (no need to create materials from scratch!)
- Friendly, supportive work environment where everyone helps each other
- Competitive pay and benefits
- Flexible schedule options
- Be part of something meaningful - helping others launch their tech careers!
Come As You Are
We believe in potential and passion. If you're excited about teaching Python and helping others, we want to talk - even if your background isn't traditional. Our community is welcoming and supportive!
Join us and help others discover the fun of Python full stack development!
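For flavor, here is the kind of first hands-on exercise such a curriculum might open with: a "hello" web app written against Python's built-in WSGI interface, before students graduate to Django or Flask. This is an illustrative sketch, not part of the actual course materials.

```python
# A minimal WSGI app -- the protocol Flask and Django ultimately speak.
# To serve it: make_server("", 8000, app).serve_forever()
from wsgiref.simple_server import make_server  # stdlib, no framework needed


def app(environ, start_response):
    """Respond to every request with a plain-text greeting."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, full stack!"]
```

Because a WSGI app is just a callable, students can unit-test it by calling it directly with a fake `start_response`, a habit that transfers straight to framework code.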

Data Scientist
Job Title: Data Scientist – Data and Artificial Intelligence
Location: Hyderabad
Job Type: Full-time
Company Description:
Qylis is a leading provider of innovative IT solutions, specializing in Cloud, Data & AI,
and Cyber Security. We help businesses unlock the full potential of these technologies
to achieve their goals and gain a competitive edge. Our unique approach focuses on
delivering value through bespoke solutions tailored to customer-specific needs. We are
driven by a customer-centric mindset and committed to delivering continuous value
through intellectual property accelerators and automation. Our team of experts is
passionate about technology and dedicated to making a positive impact. We foster an
environment of growth and innovation, constantly pushing the boundaries to deliver
exceptional results. Website: www.qylis.com, LinkedIn:
www.linkedin.com/company/qylis
Job Summary
We are an engineering organization collaborating directly with clients to address
challenges using the latest technologies. Our focus is on joint code development with
clients' engineers for cloud-based solutions, accelerating organizational progress.
Working with product teams, partners, and open-source communities, we contribute to
open source, striving for platform improvement. This role involves creating impactful
solution patterns and open-source assets. As a team member, you'll collaborate with
engineers from both teams, applying your skills and creativity to solve complex
challenges and contribute to open source, while fostering professional growth.
Responsibilities
• Research and develop production-grade models (forecasting, anomaly detection, optimization, clustering, etc.) for the global cloud business using statistical and machine learning techniques.
• Manage large volumes of data, and create new and improved solutions for data collection, management, analysis, and data science model development.
• Drive the onboarding of new data and the refinement of existing data sources
through feature engineering and feature selection.
• Apply statistical concepts and cutting-edge machine learning techniques to
analyze cloud demand and optimize data science model code for distributed
computing platforms and task automation.
• Work closely with other data scientists and data engineers to deploy models that
drive cloud infrastructure capacity planning.
• Present analytical findings and business insights to project managers,
stakeholders, and senior leadership and keep abreast of new statistical /
machine learning techniques and implement them as appropriate to improve
predictive performance.
• Oversee the analysis of data and lead the team in identifying trends, patterns, correlations, and insights to develop new forecasting models and improve existing ones.
• Lead collaboration among the team and leverage data to identify pockets of opportunity for applying state-of-the-art algorithms to improve solutions to business problems.
• Consistently apply knowledge of techniques to optimize analysis using algorithms.
• Modify statistical analysis tools for evaluating machine learning models. Solve deep and challenging problems in circumstances such as when model predictions are incorrect, when models do not match the training data or the design outcomes, when the data is not clean, when it is unclear which analyses to run, or when the process is ambiguous.
• Provide coaching to team members on business context, interpretation, and the implications of findings. Interpret findings and their implications for multiple businesses, and champion methodological rigour by calling attention to the limitations of knowledge wherever biases in data, methods, or analysis exist.
• Generate and leverage insights that inform future studies and reframe the research agenda. Inform current business decisions by implementing and adapting supply-chain strategies through complex business intelligence.
Qualifications
• M.Sc. in Statistics, Applied Mathematics, Applied Economics, Computer
Science or Engineering, Data Science, Operations Research or similar applied
quantitative field
• 7+ years of industry experience in developing production-grade statistical and
machine learning code in a collaborative team environment.
• Prior experience in machine learning using R or Python (scikit-learn / NumPy / pandas / statsmodels).
• Prior experience working on computer vision projects is a plus.
• Knowledge of AWS and Azure cloud platforms.
• Prior experience in time series forecasting.
• Prior experience with typical data management systems and tools such as SQL.
• Knowledge and ability to work within a large-scale computing or big data context,
and hands-on experience with Hadoop, Spark, DataBricks or similar.
• Excellent analytical skills; ability to understand business needs and translate
them into technical solutions, including analysis specifications and models.
• Experience in machine learning using R or Python (scikit-learn / NumPy / pandas / statsmodels) with skill level at or near fluency.
• Experience with deep learning frameworks (e.g., TensorFlow, PyTorch, CNTK) and solid knowledge of theory and practice.
• Practical and professional experience contributing to and maintaining a large
code base with code versioning systems such as Git.
• Creative thinking skills with emphasis on developing innovative methods to solve
hard problems under ambiguity.
• Good interpersonal and communication (verbal and written) skills, including the
ability to write concise and accurate technical documentation and communicate
technical ideas to non-technical audiences.
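As an illustration of the time series forecasting work described above, here is a deliberately naive moving-average forecaster in plain Python. A production model would use statsmodels or scikit-learn, but the recursive shape (predict one step, append it, predict again) is the same; the function name and defaults are our own.

```python
def moving_average_forecast(series, window=3, horizon=2):
    """Forecast `horizon` future points as the mean of the last `window` values,
    feeding each forecast back in as if it were observed (recursive forecasting)."""
    history = list(series)
    forecasts = []
    for _ in range(horizon):
        avg = sum(history[-window:]) / window
        forecasts.append(avg)
        history.append(avg)  # recursive step: treat the forecast as data
    return forecasts
```

For example, `moving_average_forecast([1, 2, 3], window=3, horizon=1)` returns `[2.0]`.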

What You Need:
✅ Strong experience in backend development using Python (Django, Flask, or FastAPI).
✅ Hands-on experience with Azure Cloud services (Azure Functions, App Services, AKS, CosmosDB, etc.).
✅ Experience leading a development team and managing projects.
✅ Expertise in designing and managing APIs, microservices, and event-driven architectures.
✅ Strong database experience with MongoDB, PostgreSQL, MySQL, or CosmosDB.
✅ Knowledge of DevOps practices, including CI/CD pipelines, Docker, and Kubernetes.
✅ Ability to develop Proof of Concepts (POCs) and evaluate new technology solutions.
✅ Strong problem-solving and debugging skills.
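The event-driven architectures listed above often start from a pattern as simple as an in-memory publish/subscribe bus. This hypothetical sketch shows only the core idea; a real deployment on this stack would use Azure Service Bus, Event Grid, or similar rather than a dict of callbacks.

```python
class EventBus:
    """Toy in-process pub/sub bus: handlers subscribe to a topic string
    and are invoked synchronously when an event is published to it."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        # Deliver only to handlers registered for this topic.
        for handler in self._subscribers.get(topic, []):
            handler(payload)
```

Swapping the dict for a broker client is what turns this into a real microservice integration; the subscribe/publish contract stays the same.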
We’re looking for an experienced SQL Developer with 3+ years of hands-on experience to join our growing team. In this role, you’ll be responsible for designing, developing, and maintaining SQL queries, procedures, and data systems that support our business operations and decision-making processes. You should be passionate about data, highly analytical, and capable of working both independently and collaboratively with cross-functional teams.
Key Responsibilities:
Design, develop, and maintain complex SQL queries, stored procedures, functions, and views.
Optimize existing queries for performance and efficiency.
Collaborate with data analysts, developers, and stakeholders to understand requirements and translate them into robust SQL solutions.
Design and implement ETL processes to move and transform data between systems.
Perform data validation, troubleshooting, and quality checks.
Maintain and improve existing databases, ensuring data integrity, security, and accessibility.
Document code, processes, and data models to support scalability and maintainability.
Monitor database performance and provide recommendations for improvement.
Work with BI tools and support dashboard/report development as needed.
Requirements:
3+ years of proven experience as an SQL Developer or in a similar role.
Strong knowledge of SQL and relational database systems (e.g., MS SQL Server, PostgreSQL, MySQL, Oracle).
Experience with performance tuning and optimization.
Proficiency in writing complex queries and working with large datasets.
Experience with ETL tools and data pipeline creation.
Familiarity with data warehousing concepts and BI reporting.
Solid understanding of database security, backup, and recovery.
Excellent problem-solving skills and attention to detail.
Good communication skills and ability to work in a team environment.
Nice to Have:
Experience with cloud-based databases (AWS RDS, Google BigQuery, Azure SQL).
Knowledge of Python, Power BI, or other scripting/analytics tools.
Experience working in Agile or Scrum environments.
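Much of the day-to-day described here (views, indexes, aggregate queries, validation checks) can be prototyped locally with Python's built-in sqlite3 module. A hedged sketch with a made-up `orders` schema:

```python
import sqlite3


def customer_totals(orders):
    """Load (customer, amount) rows into an in-memory SQLite database and
    report per-customer totals via a view, highest total first."""
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT NOT NULL, amount REAL NOT NULL)"
    )
    # Index supports the GROUP BY and customer lookups.
    cur.execute("CREATE INDEX idx_orders_customer ON orders(customer)")
    cur.executemany("INSERT INTO orders (customer, amount) VALUES (?, ?)", orders)
    cur.execute(
        "CREATE VIEW v_customer_totals AS "
        "SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer"
    )
    rows = cur.execute(
        "SELECT customer, total FROM v_customer_totals ORDER BY total DESC"
    ).fetchall()
    conn.close()
    return rows
```

The same view/index/aggregate patterns carry over to MS SQL Server, PostgreSQL, or Oracle, with each engine adding its own stored-procedure and tuning machinery.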

Assignment Details
Our client, a global leader in energy management and automation, is seeking a skilled and experienced Test Automation Engineer with strong expertise in developing automation frameworks for Windows and Web applications. The ideal candidate will have hands-on experience with Python and Robot Framework, and a solid background in software development, debugging, and unit testing. This role requires the ability to work independently, contribute to the entire testing lifecycle, and collaborate with cross-functional teams in an Agile environment.
Key Responsibilities:
- Design and develop robust Test Automation Frameworks for both Windows and Web applications.
- Implement automated test cases using Python and Robot Framework.
- Collaborate with development teams to understand feature requirements and break them down into actionable tasks.
- Use version control and issue tracking tools like TFS/ADO, GitHub, Jira, SVN, etc.
- Perform code reviews, unit testing, and debugging of automation scripts.
- Clearly document and report test results, defects, and automation progress.
- Maintain and enhance existing test automation suites to support continuous delivery pipelines.
Skills Required
- 5–10 years of professional experience in Test Automation and Software Development.
- Strong proficiency in Python and Robot Framework.
- Solid experience with Windows and Web application testing.
- Familiarity with version control systems such as TFS, GitHub, SVN and project tracking tools like Jira.
- Strong analytical and problem-solving skills.
- Ability to work independently with minimal supervision.
- Excellent written and verbal communication skills for documentation and reporting.
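Robot Framework keywords can be implemented as plain Python classes, where each public method becomes a keyword callable from a `.robot` file. A minimal, hypothetical keyword library (the class and keyword names are ours):

```python
class CalculatorLibrary:
    """Example Robot Framework keyword library: `add_numbers` is callable
    from a test as `Add Numbers    2    3`, then `Result Should Be    5`."""

    def __init__(self):
        self._result = 0.0

    def add_numbers(self, a, b):
        # Robot Framework passes arguments as strings, so coerce explicitly.
        self._result = float(a) + float(b)
        return self._result

    def result_should_be(self, expected):
        # Raising AssertionError is how a keyword fails a Robot test.
        if self._result != float(expected):
            raise AssertionError(f"Expected {expected!r}, got {self._result!r}")
```

In a suite this would be pulled in with `Library    CalculatorLibrary`; the same pattern scales up to libraries that drive Windows or Web applications.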

Role description:
You will be building curated, enterprise-grade solutions for GenAI application deployment at production scale for clients. The role requires solid, hands-on development and engineering skills across the GenAI application lifecycle, including data ingestion, choosing the right-fit LLMs, simple and advanced RAG, guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale on cloud or on-premise. As this space evolves very rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. Candidates with a strong ML background and engineering skills are highly preferred for the LLMOps role.
Required skills:
- 4-8 years of experience in working on ML projects that includes business requirement gathering, model development, training, deployment at scale and monitoring model performance for production use cases
- Strong knowledge of Python, NLP, Data Engineering, Langchain, Langtrace, Langfuse, RAGAS, AgentOps (optional)
- Should have worked with proprietary and open-source large language models
- Experience with LLM fine-tuning and creating distilled models from hosted LLMs
- Building data pipelines for model training
- Experience with model performance tuning, RAG, guardrails, prompt engineering, evaluation, and observability
- Experience with GenAI application deployment on cloud and on-premise at scale for production
- Experience in creating CI/CD pipelines
- Working knowledge of Kubernetes
- Experience with at least one cloud (AWS / GCP / Azure) to deploy AI services
- Experience creating workable prototypes using agentic AI frameworks like CrewAI, Taskweaver, AutoGen
- Experience in lightweight UI development using Streamlit or Chainlit (optional)
- Desired experience on open-source tools for ML development, deployment, observability and integration
- Background on DevOps and MLOps will be a plus
- Experience working on collaborative code versioning tools like GitHub/GitLab
- Team player with good communication and presentation skills
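To make the "simple RAG" requirement above concrete, here is a toy retrieval step using bag-of-words cosine similarity. Real deployments would use embedding models and a vector store (per the Langchain/RAGAS stack listed), but the retrieve-then-prompt flow is the same; all names here are illustrative.

```python
import math
from collections import Counter


def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())


def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    denom = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / denom if denom else 0.0


def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]


def build_prompt(query, documents, k=2):
    """RAG step two: stuff the retrieved context into the LLM prompt."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Guardrails, evaluation (e.g., RAGAS), and observability then wrap around exactly these two seams: what was retrieved, and what prompt was sent.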

Role Summary:
AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 2+ years of prior experience in data engineering, with a strong background in AWS (Amazon Web Services) technologies. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Responsibilities:
· Design, develop, and maintain scalable data pipelines and ETL processes leveraging AWS services such as S3, Glue, EMR, Lambda, and Redshift.
· Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
· Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness.
· Implement data governance and security best practices to ensure compliance and data integrity.
· Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring.
· Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
Qualifications:
· Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
· 2+ years of prior experience in data engineering, with a focus on designing and building data pipelines.
· Proficiency in AWS services, particularly S3, Glue, EMR, Lambda, and Redshift.
· Strong programming skills in languages such as Python, Java, or Scala.
· Experience with SQL and NoSQL databases, data warehousing concepts, and big data technologies.
· Familiarity with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools (e.g., Apache Airflow) is a plus.
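Pipelines built on Glue or Lambda usually reduce to transform functions like the one below, which keeps the logic testable outside AWS. The record schema and data-quality rules here are invented for illustration:

```python
def transform_records(raw_records):
    """One ETL transform step: drop incomplete rows, normalise types,
    and emit records ready for loading into a warehouse table."""
    out = []
    for rec in raw_records:
        if rec.get("amount") is None:
            continue  # data-quality rule: skip rows with no amount
        out.append({
            "customer_id": str(rec["customer_id"]).strip(),
            "amount_usd": round(float(rec["amount"]), 2),
        })
    return out
```

In a Glue job the same function would sit between the S3 read and the Redshift write; keeping it pure makes the pipeline easy to unit-test and to monitor for bad-row rates.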

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification: B.Tech, BE, M.Tech, or ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python programming language and SQL.
● Experience in DataBricks and setting up and managing data pipelines, data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note the salary bracket will vary according to the candidate's experience:
- 4 to 6 years of experience: salary up to 22 LPA
- 5 to 8 years of experience: salary up to 30 LPA
- More than 8 years of experience: salary up to 40 LPA

We are looking for skilled and passionate Data Engineers with a strong foundation in Python programming and hands-on experience working with APIs, AWS cloud, and modern development practices. The ideal candidate will have a keen interest in building scalable backend systems and working with big data tools like PySpark.
Key Responsibilities:
- Write clean, scalable, and efficient Python code.
- Work with Python frameworks such as PySpark for data processing.
- Design, develop, update, and maintain APIs (RESTful).
- Deploy and manage code using GitHub CI/CD pipelines.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work on AWS cloud services for application deployment and infrastructure.
- Basic database design and interaction with MySQL or DynamoDB.
- Debugging and troubleshooting application issues and performance bottlenecks.
Required Skills & Qualifications:
- 4+ years of hands-on experience with Python development.
- Proficient in Python basics with a strong problem-solving approach.
- Experience with AWS Cloud services (EC2, Lambda, S3, etc.).
- Good understanding of API development and integration.
- Knowledge of GitHub and CI/CD workflows.
- Experience in working with PySpark or similar big data frameworks.
- Basic knowledge of MySQL or DynamoDB.
- Excellent communication skills and a team-oriented mindset.
Nice to Have:
- Experience in containerization (Docker/Kubernetes).
- Familiarity with Agile/Scrum methodologies.
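One way to keep RESTful API code testable, regardless of framework, is to reduce each endpoint to a pure function from request parameters to a response dict, as in this hypothetical paginated `GET /items` handler:

```python
def list_items(items, page=1, per_page=10):
    """Hypothetical collection endpoint reduced to a pure function:
    validate pagination, slice the collection, return a response dict."""
    if page < 1 or per_page < 1:
        return {"status": 400, "body": {"error": "invalid pagination"}}
    start = (page - 1) * per_page
    return {
        "status": 200,
        "body": {
            "items": items[start:start + per_page],
            "page": page,
            "total": len(items),
        },
    }
```

The framework layer (Flask routes, API Gateway + Lambda, etc.) then only translates HTTP to and from these dicts, so the business logic runs in plain unit tests.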
As an RPA (Robotic Process Automation) Lead, you will drive the strategic implementation of automation solutions, lead a team in designing and deploying robotic workflows, and collaborate with stakeholders to optimize business processes, ensuring efficiency and innovation.
We are looking for you!
You are a team player and a get-it-done person: intellectually curious, customer-focused, self-motivated, and responsible, able to work under pressure with a positive attitude. You have the zeal to think differently, understand that a career is a journey, and make the right choices. The ideal candidate is creative, proactive, a go-getter, and motivated to look for ways to add value.
You are self-motivated with a strong work ethic, a positive attitude and demeanor, and enthusiasm for embracing new challenges. You can multitask and prioritize (good time management skills), are willing to learn new technologies and methodologies, and remain adaptable and flexible when new products are assigned. You can work independently with little or no supervision. You are process-oriented, take a methodical approach, demonstrate a quality-first mindset, and have preferably worked in result-oriented teams.
What you’ll do
- Work in customer-facing roles, understanding business requirements and process assessment.
- Conduct architectural evaluation, design, and analysis of automation deployments.
- Hands-on bot design, bot development, testing, and debugging.
- Prepare and review technical documentation (Solution Design Document).
- Drive best-practice design: identifying reusable components, queues, and configurable parameters.
- Interact with customers across the software development lifecycle, following Agile project management methodology.
- Research, recommend, and implement new processes and technology to improve the quality of services provided.
- Partner with the Pre-sales team to estimate efforts and craft solutions.
What you will Bring
- Bachelor's degree in Computer Science, or any related field.
- 8 to 12 years of experience with hands-on experience in RPA development and deployment.
- Certifications in RPA platforms, preferably UiPath or Power Automate.
- Hands-on experience working in development or support projects.
- Experience working in Agile SCRUM environment.
- Strong communication, organizational, analytical and problem-solving skills.
- Ability to succeed in a collaborative and fast paced environment.
- Ensure the delivery of high-quality solutions that meet client expectations.
- Ability to lead teams of developers and junior developers.
- Programming languages: knowledge of at least one of C#, Visual Basic, Python, .NET, or Java
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.

Job Description:
As a Machine Learning Engineer, you will:
- Operationalize AI models for production, ensuring they are scalable, robust, and efficient.
- Work closely with data scientists to optimize machine learning model performance.
- Utilize Docker and Kubernetes for the deployment and management of AI models in a production environment.
- Collaborate with cross-functional teams to integrate AI models into products and services.
Responsibilities:
- Develop and deploy scalable machine learning models into production environments.
- Optimize models for performance and scalability.
- Implement continuous integration and deployment (CI/CD) pipelines for machine learning projects.
- Monitor and maintain model performance in production.
Key Performance Indicators (KPI) For Role:
- Success in deploying scalable and efficient AI models into production.
- Improvement in model performance and scalability post-deployment.
- Efficiency in model deployment and maintenance processes.
- Positive feedback from team members and stakeholders on AI model integration and performance.
- Adherence to best practices in machine learning engineering and deployment.
Prior Experience Required:
- 2-4 years of experience in machine learning or data science, with a focus on deploying machine learning models into production.
- Proficient in Python and familiar with data science libraries and frameworks (e.g., TensorFlow, PyTorch).
- Experience with Docker and Kubernetes for containerization and orchestration of machine learning models.
- Demonstrated ability to optimize machine learning models for performance and scalability.
- Familiarity with machine learning lifecycle management tools and practices.
- Experience in developing and maintaining scalable and robust AI systems.
- Knowledge of best practices in AI model testing, versioning, and deployment.
- Strong understanding of data preprocessing, feature engineering, and model evaluation metrics.
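Among the model evaluation metrics mentioned, precision, recall, and F1 are worth being able to compute by hand. A from-scratch version for binary labels (in practice you would reach for scikit-learn's `classification_report`):

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Monitoring these metrics on live traffic, not just at training time, is what "maintaining model performance in production" boils down to.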
Employer:
RaptorX.ai
Location:
Hyderabad
Collaboration:
The role requires collaboration with data engineers, software developers, and product managers to ensure the seamless integration of AI models into products and services.
Salary:
Competitive, based on experience.
Education:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
Language Skills:
- Strong command of Business English, both verbal and written, is required.
Other Skills Required:
- Strong analytical and problem-solving skills.
- Proficiency in code versioning tools, such as Git.
- Ability to work in a fast-paced and evolving environment.
- Excellent teamwork and communication skills.
- Familiarity with agile development methodologies.
- Understanding of cloud computing services (AWS, Azure, GCP) and their use in deploying machine learning models is a plus.
Other Requirements:
- Proven track record of successfully deploying machine learning models into production.
- Ability to manage multiple projects simultaneously and meet deadlines.
- A portfolio showcasing successful AI/ML projects.
Founders and Leadership
RaptorX is led by seasoned founders with deep expertise in security, AI, and enterprise solutions. Our leadership team has held senior positions at global tech giants like Microsoft, Palo Alto Networks, Akamai, and Zscaler, solving critical problems at scale.
We bring not just technical excellence, but also a relentless passion for innovation and impact.
The Market Opportunity
Fraud costs the global economy trillions of dollars annually, and traditional fraud detection methods simply can't keep up. The demand for intelligent, adaptive solutions like RaptorX is massive and growing exponentially across industries like:
- Fintech and Banking
- E-commerce
- Payments
This is your chance to work on a product that addresses a multi-billion-dollar market with huge growth potential.
The Tech Space at RaptorX
We are solving large-scale, real-world problems using modern technologies, offering specialized growth paths for every tech role.
Why You Should Join Us
- Opportunity to Grow: As an early-stage startup, every contribution you make will have a direct impact on the company’s growth and success. You’ll wear multiple hats, learn fast, and grow exponentially.
- Innovate Every Day: Solve complex, unsolved problems using the latest in AI, Graph Databases, and advanced analytics.
- Collaborate with the Best: Work alongside some of the brightest minds in the industry. Learn from leaders who have built and scaled successful products globally.
- Make an Impact: Help businesses reduce losses, secure customers, and prevent fraud globally. Your work will create a tangible difference.

What We’re Looking For
Proven experience as a Machine Learning Engineer, Data Scientist, or similar role
Expertise in applying machine learning algorithms, deep learning, and data mining techniques in an enterprise environment
Strong proficiency in Python (or other languages) and familiarity with libraries such as Scikit-learn, TensorFlow, PyTorch, or similar.
Experience working with natural language processing (NLP) or computer vision is highly desirable.
Understanding of and experience with MLOps, including model development, deployment, monitoring, and maintenance.
Experience with cloud platforms (like AWS, Google Cloud, or Azure) and knowledge of deploying machine learning models at scale.
Familiarity with data architecture, data engineering, and data pipeline tools.
Familiarity with containerization technologies such as Docker, and orchestration systems like Kubernetes.
Knowledge of the insurance sector is beneficial but not required.
Bachelor's/Master's degree in Computer Science, Data Science, Mathematics, or a related field.
What You’ll Be Doing
Algorithm Development:
Design and implement advanced machine learning algorithms tailored for our datasets.
Model Creation:
Build, train, and refine machine learning models for business integration.
Collaboration:
Partner with product managers, developers, and data scientists to align machine learning solutions with business goals.
Industry Innovation:
Stay updated with Insurtech trends and ensure our solutions remain at the forefront.
Validation:
Test algorithms for accuracy and efficiency, collaborating with the QA team.
Documentation:
Maintain clear records of algorithms and models for team reference.
Professional Growth:
Engage in continuous learning and mentor junior team members.

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
- Shift: 2 PM to 11 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or those with a notice period of up to 30 days
Key Responsibilities:
- Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
- Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers.
- Ensure data integrity, consistency, and availability across all systems.
- Collaborate with data engineers, analysts, and stakeholders to optimize performance.
- Document standards and best practices for data engineering workflows.
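The Raw → Silver → Gold layering above (the "medallion" pattern) boils down to staged transforms: Silver cleans and types the raw rows, Gold aggregates them for consumption. A framework-free sketch with an invented sensor schema; in Databricks these stages would be PySpark jobs orchestrated by Airflow:

```python
def to_silver(raw_rows):
    """Silver layer: drop malformed rows, normalise keys, enforce types."""
    silver = []
    for row in raw_rows:
        try:
            silver.append({
                "sensor": row["sensor"].strip().lower(),
                "value": float(row["value"]),
            })
        except (KeyError, ValueError):
            continue  # quarantine/skip rows that fail validation
    return silver


def to_gold(silver_rows):
    """Gold layer: business-ready aggregate (mean reading per sensor)."""
    grouped = {}
    for row in silver_rows:
        grouped.setdefault(row["sensor"], []).append(row["value"])
    return {sensor: sum(vals) / len(vals) for sensor, vals in grouped.items()}
```

Keeping each layer a pure transform is what makes the "data integrity and consistency" responsibility checkable: every layer can be validated independently.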
Required Experience:
- 7-8 years of experience in data engineering, architecture, and pipeline development.
- Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
- Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
- Understanding of Data Lake table formats (Delta, Iceberg, etc.).
- Proficiency in Python for scripting and automation.
- Strong problem-solving skills and collaborative mindset.
⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Best regards,
Vijay S
Assistant Manager - TAG




JD:
As a Staff Software Engineer, you will be responsible for the design, implementation and maintenance of software modules owned by a team. You will guide and mentor junior team members by reviewing their code and designs to ensure that they are writing well designed and high quality code. This role requires excellent communication skills as you will need to collaborate across teams.
Key Skills:
- 3 to 6 years of experience
- Extensive experience with C/C++/Python
- Experience with operating systems: Unix/Linux and Windows
- Hands-on experience in networking and multi-threading
Desirable Skills:
- Experience in Cloud Technologies, system programming is a plus
Roles and responsibilities:
- Play a key role in the design and development of a security product.
- Responsible for the complete software development cycle.
- Gather and understand requirements.
- Design, develop, and implement features independently.
- Responsible for ensuring timely, high-quality delivery.
- Able to work on features end to end.
Qualification:
- B.Tech / B.E / M.E./ M.Tech (Computer Science) or equivalent.


JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon-Fri role, in office, with excellent perks and benefits!
Position Overview
We are seeking a Software Architect to lead the design and development of high-performance robotics and AI software stacks utilizing NVIDIA technologies. This role will focus on defining scalable, modular, and efficient architectures for robot perception, planning, simulation, and embedded AI applications. You will collaborate with cross-functional teams to build next-generation autonomous systems.
Key Responsibilities:
1. System Architecture & Design
● Define scalable software architectures for robotics perception, navigation, and AI-driven decision-making.
● Design modular and reusable frameworks that leverage NVIDIA’s Jetson, Isaac ROS, Omniverse, and CUDA ecosystems.
● Establish best practices for real-time computing, GPU acceleration, and edge AI inference.
2. Perception & AI Integration
● Architect sensor fusion pipelines using LIDAR, cameras, IMUs, and radar with DeepStream, TensorRT, and ROS2.
● Optimize computer vision, SLAM, and deep learning models for edge deployment on Jetson Orin and Xavier.
● Ensure efficient GPU-accelerated AI inference for real-time robotics applications.
3. Embedded & Real-Time Systems
● Design high-performance embedded software stacks for real-time robotic control and autonomy.
● Utilize NVIDIA CUDA, cuDNN, and TensorRT to accelerate AI model execution on Jetson platforms.
● Develop robust middleware frameworks to support real-time robotics applications in ROS2 and Isaac SDK.
4. Robotics Simulation & Digital Twins
● Define architectures for robotic simulation environments using NVIDIA Isaac Sim & Omniverse.
● Leverage synthetic data generation (Omniverse Replicator) for training AI models.
● Optimize sim-to-real transfer learning for AI-driven robotic behaviors.
5. Navigation & Motion Planning
● Architect GPU-accelerated motion planning and SLAM pipelines for autonomous robots.
● Optimize path planning, localization, and multi-agent coordination using Isaac ROS Navigation.
● Implement reinforcement learning-based policies using Isaac Gym.
6. Performance Optimization & Scalability
● Ensure low-latency AI inference and real-time execution of robotics applications.
● Optimize CUDA kernels and parallel processing pipelines for NVIDIA hardware.
● Develop benchmarking and profiling tools to measure software performance on edge AI devices.
Required Qualifications:
● Master’s or Ph.D. in Computer Science, Robotics, AI, or Embedded Systems.
● Extensive experience (7+ years) in software development, with at least 3-5 years focused on architecture and system design, especially for robotics or embedded systems.
● Expertise in CUDA, TensorRT, DeepStream, PyTorch, TensorFlow, and ROS2.
● Experience in NVIDIA Jetson platforms, Isaac SDK, and GPU-accelerated AI.
● Proficiency in programming languages such as C++, Python, or similar, with deep understanding of low-level and high-level design principles.
● Strong background in robotic perception, planning, and real-time control.
● Experience with cloud-edge AI deployment and scalable architectures.
Preferred Qualifications
● Hands-on experience with NVIDIA DRIVE, NVIDIA Omniverse, and Isaac Gym
● Knowledge of robot kinematics, control systems, and reinforcement learning
● Expertise in distributed computing, containerization (Docker), and cloud robotics
● Familiarity with automotive, industrial automation, or warehouse robotics
● Experience designing architectures for autonomous systems or multi-robot systems.
● Familiarity with cloud-based solutions, edge computing, or distributed computing for robotics
● Experience with microservices or service-oriented architecture (SOA)
● Knowledge of machine learning and AI integration within robotic systems
● Knowledge of testing on edge devices with HIL and simulations (Isaac Sim, Gazebo, V-REP etc.)
Mon-Fri, in-office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js, Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka.
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js, Python, or Java.
2. Strong experience in:
i. Microservices architecture
ii. API gateways
iii. NoSQL databases (e.g., MongoDB, DynamoDB)
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees)
3. Frameworks:
i. If Java: the Spring framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node.js: Express.js for backend development.
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.
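The data structures bullet above (arrays, linked lists, trees) is standard interview ground for roles like this; as a hypothetical warm-up, a minimal singly linked list with in-place reversal:

```python
# Singly linked list with O(n)-time, O(1)-space in-place reversal.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    next: Optional["Node"] = None

def from_list(values: list[int]) -> Optional[Node]:
    """Build a linked list from a Python list."""
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def reverse(head: Optional[Node]) -> Optional[Node]:
    """Reverse the list in place by re-pointing each node's next link."""
    prev = None
    while head:
        head.next, prev, head = prev, head, head.next
    return prev

def to_list(head: Optional[Node]) -> list[int]:
    """Flatten a linked list back into a Python list."""
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out

print(to_list(reverse(from_list([1, 2, 3]))))  # [3, 2, 1]
```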

About the Role:
We are seeking an experienced and driven Lead Backend Engineer to oversee and elevate our backend architecture. This role will focus deeply on backend systems, collaborating closely with the founder and core team to turn strategic goals into reality through backend excellence. The ideal candidate will combine strong technical expertise with leadership capabilities, driving backend development while ensuring system security, scalability, and performance.
Key Responsibilities:
1. Backend Development Leadership
- Ownership of Backend Systems: Lead the backend development process, aligning it with the company's broader goals. Gain a full understanding of the existing backend infrastructure, especially in the initial phase.
- Roadmap Development: Within the first three months, build a detailed roadmap that addresses backend "must-do" tasks (e.g., major bugs, security vulnerabilities, data leakage prevention) alongside longer-term improvements. Continuously update the roadmap based on strategic directions from board meetings.
2. Strategic Planning and Execution
- Backend Strategy Implementation: Translate high-level strategies into backend tasks, ensuring clarity on how each piece fits into the company's larger goals.
- Sprint and Task Management: Lead the backend sprint planning process, break down tasks into manageable components, and ensure accurate estimations for efficient execution.
3. Team Leadership and Development
- Mentoring and Growth: Lead backend developers, nurturing their growth while ensuring a culture of responsibility and continuous improvement.
- Process Optimization: Regularly assess backend processes, identifying areas to streamline development and ensure adherence to best practices.
4. Security and Quality Assurance
- Security Oversight: Ensure the backend systems are fortified against potential threats, setting the highest standards for security in every aspect of development.
- Quality Assurance: Maintain top-tier backend development standards, ensuring the system remains resilient, scalable, and efficient under load.
5. Innovation and Continuous Learning
- Real-time Strategy Input: Offer insights during strategic discussions on backend challenges, providing quick, effective solutions when needed.
- Automation and Efficiency: Implement backend automation practices, from CI/CD pipelines to other efficiency-boosting tools that improve the backend workflow.
6. Research and Communication
- Technology Exploration: Stay ahead of backend trends and technologies, providing research and recommendations to stakeholders. Break down complex backend issues into understandable, actionable points.
7. Workplace Expectations
- Ownership Mentality: Embody a strong sense of ownership over the backend systems, with a proactive attitude that eliminates the need for close follow-up.
- On-site Work: Work from the office is required to foster close collaboration with the team.
Tech Stack & Skills
Must-Have:
- Programming Languages: Node.js & JavaScript (TypeScript or plain JavaScript)
- Databases: Firestore, MongoDB, NoSQL
- Cloud Platforms: Google Cloud Platform (GCP), AWS
- Microservices: Google Cloud Functions
- Containerization: Docker (creation, hosting, maintenance, etc.)
- Deployment & Orchestration: Google Cloud Run
- Messaging & Task Management: Pub/Sub, Google Cloud Tasks
- Security: GCP/AWS Security (IAMs)
Good-to-Have:
- Programming Languages: Python
Qualifications:
- Proven experience as a Lead Backend Engineer or similar role, focusing on backend systems.
- Expertise in the backend technologies specified.
- Strong understanding of CI/CD pipelines and backend security best practices.
- Excellent problem-solving skills and an ability to think critically about backend challenges.
- Strong leadership qualities with the ability to mentor and manage backend developers.
- A passion for continuous learning and applying new backend technologies.
- A high degree of ownership over backend systems, with the ability to work independently.


Level of skills and experience:
5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and PyTorch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake and other data file formats.
Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.
Strong written and verbal English communication skills, with proficiency in communicating with non-technical stakeholders.
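For the Python and SQL skills listed above, a self-contained sanity check (stdlib sqlite3 stands in here for Spark SQL/Databricks, which need a cluster): a grouped aggregation of the kind these pipelines run constantly.

```python
# Grouped aggregation in SQL via an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("a", 10.0), ("a", 5.0), ("b", 7.5)],
)
# Total spend per user, largest first.
rows = conn.execute(
    "SELECT user_id, SUM(amount) AS total "
    "FROM events GROUP BY user_id ORDER BY total DESC"
).fetchall()
print(rows)  # [('a', 15.0), ('b', 7.5)]
```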



We're looking for AI/ML enthusiasts who build, not just study. If you've implemented transformers from scratch, fine-tuned LLMs, or created innovative ML solutions, we want to see your work!
Make Sure before Applying (GitHub Profile Required):
1. Your GitHub must include:
- At least one substantial ML/DL project with documented results
- Code demonstrating PyTorch/TensorFlow implementation skills
- Clear documentation and experiment tracking
- Bonus: Contributions to ML open-source projects
2. Pin your best projects that showcase:
- LLM fine-tuning and evaluation
- Data preprocessing pipelines
- Model training and optimization
- Practical applications of AI/ML
Technical Requirements:
- Solid understanding of deep learning fundamentals
- Python + PyTorch/TensorFlow expertise
- Experience with Hugging Face transformers
- Hands-on with large dataset processing
- NLP/Computer Vision project experience
Education:
- Completed/Pursuing Bachelor's in Computer Science or related field
- Strong foundation in ML theory and practice
Apply if:
- You have done projects using GenAI, Machine Learning, Deep Learning.
- You must have strong Python coding experience.
- Someone who is available immediately to start with us in the office(Hyderabad).
- Someone who has the hunger to learn something new always and aims to step up at a high pace.
We value quality implementations and thorough documentation over quantity. Show us how you think through problems and implement solutions!
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
- Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face. Techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
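Candidates who have "implemented transformers from scratch," as the posting above asks, will recognize scaled dot-product attention; a dependency-free, single-head sketch in pure Python (illustrative only; no batching, masking, or learned projections):

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V,
# with Q, K, V given as plain lists of d-dimensional vectors.
import math

def softmax(xs: list[float]) -> list[float]:
    """Numerically stable softmax (subtract the max before exponentiating)."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Each output row is a convex combination of the rows of V."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))  # one output vector, a blend of the rows of V
```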



What You’ll Be Doing
- 🛠 Write code for web and mobile apps, fix bugs, and work on features that people will actually use.
- 💡 Join brainstorming sessions and help shape our products.
- 🚀 Things move fast here, and you’ll learn as you go.
- 🤝 Work closely with everyone—designers, developers, and even marketing folks.
- 🔧 Dive into our tech stack: React, React Native, Node.js, Express, Python, FastAPI, and PostgreSQL.
What We’re Looking For
We’re not looking for perfection, but if you’re curious, motivated, and excited to learn, you’ll fit right in!
For Backend Engineers
- 💻 Strong knowledge of Python, FastAPI, and PostgreSQL.
- 🔍 Solid understanding of Low-Level Design (LLD) and High-Level Design (HLD).
- ⚡ Ability to optimize APIs, manage databases efficiently, and handle real-world scaling challenges.
For Frontend Engineers
- 💻 Expertise in React Native.
- 🎯 Knowledge of native Kotlin (Android) and Swift (iOS) is a big bonus.
- 🚀 Comfortable with state management, performance optimization, and handling platform-specific quirks.
General Expectations for All Engineers
- 🛠 While you’ll be specialized in either frontend or backend, you should be good enough to fix bugs in both.
- 🔍 You enjoy figuring things out and experimenting until you get it right.
- 🤝 Great communication skills and a collaborative mindset.
- 🚀 You’re ready to dive in and make things happen.
Interview Process
If we like your application, be ready to:
- Solve a data structures and algorithms (DSA) problem in your preferred programming language.
- Answer questions about your specialized area (frontend/backend) to showcase your depth of knowledge.
- Discuss a real-world problem and how you'd debug and fix an issue in both frontend and backend.
Why Join Us?
- 💡 Your work will matter here—no busy work, just real projects with real outcomes.
- 🚀 Help shape the future of our company.
- 🎉 We’re all about solving cool problems and having fun while we do it.
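The DSA round mentioned in the interview process above typically involves problems like the following; binary search over a sorted list is a representative (hypothetical) example:

```python
# Binary search: O(log n) lookup in a sorted list; returns the index or -1.
def binary_search(xs: list[int], target: int) -> int:
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1   # target is in the right half
        else:
            hi = mid - 1   # target is in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```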

We are seeking a highly skilled and experienced Offshore Data Engineer. The role involves designing, implementing, and testing data pipelines and products.
Qualifications & Experience:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, dbt, and GCP services.
Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, and Cassandra.
Strong proficiency in Python and data modelling.
Experience in testing and validation of data pipelines.
Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
If you meet the above criteria and are interested, please share your updated CV along with the following details:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Required Qualifications:
Bachelor’s degree or equivalent in Computer Science, Engineering, or related field; or equivalent work experience.
4-10 years of proven experience in Data Engineering
At least 4+ years of experience on AWS Cloud
Strong understanding of data warehousing principles and data modeling
Expertise in SQL, including advanced query optimization techniques; able to build queries and data visualizations to support business use cases and analytics.
Proven experience on the AWS environment including access governance, infrastructure changes and implementation of CI/CD processes to support automated development and deployment
Proven experience with software tools including PySpark and Python, Power BI, QuickSight, and core AWS tools such as Lambda, RDS, CloudWatch, CloudTrail, SNS, SQS, etc.
Experience building services/APIs on AWS Cloud environment.
Experience with data ingestion and curation, as well as implementation of data pipelines.
Preferred Qualifications:
Experience in Informatica/ETL technology will be a plus.
Experience with AI/ML Ops – model build through implementation lifecycle in AWS Cloud environment.
Hands-on experience on Snowflake would be good to have.
Experience in DevOps and microservices would be preferred.
Experience in the financial industry is a plus.
We are seeking a Senior Data Scientist with hands-on experience in Generative AI (GenAI) and Large Language Models (LLM). The ideal candidate will have expertise in building, fine-tuning, and deploying LLMs, as well as managing the lifecycle of AI models through LLMOps practices. You will play a key role in driving AI innovation, developing advanced algorithms, and optimizing model performance for various business applications.
Key Responsibilities:
- Develop, fine-tune, and deploy Large Language Models (LLM) for various business use cases.
- Implement and manage the operationalization of LLMs using LLMOps best practices.
- Collaborate with cross-functional teams to integrate AI models into production environments.
- Optimize and troubleshoot model performance to ensure high accuracy and scalability.
- Stay updated with the latest advancements in Generative AI and LLM technologies.
Required Skills and Qualifications:
- Strong hands-on experience with Generative AI, LLMs, and NLP techniques.
- Proven expertise in LLMOps, including model deployment, monitoring, and maintenance.
- Proficiency in programming languages like Python and frameworks such as TensorFlow, PyTorch, or Hugging Face.
- Solid understanding of AI/ML algorithms and model optimization.
Job Summary:
We are seeking an experienced Automation Tester to join our team. The successful candidate will be responsible for designing, developing, and maintaining automated tests for our software applications using Python. The ideal candidate will have a strong understanding of software testing principles, including test-driven development, behavior-driven development, and automated testing frameworks.
Key Responsibilities:
1. Test Automation: Design, develop, and maintain automated tests for software applications using Python and various testing frameworks such as Pytest, Unittest, and Behave.
2. Test Script Development: Develop test scripts to automate testing of software applications, including web, mobile, and desktop applications.
3. Test Framework Development: Develop and maintain test frameworks to support automated testing, including data-driven testing, keyword-driven testing, and hybrid testing.
4. Test Environment Setup: Set up and maintain test environments, including virtual machines, containers, and cloud-based infrastructure.
5. Test Data Management: Manage test data, including data creation, data masking, and data validation.
6. Defect Reporting: Identify, report, and track defects found during automated testing, including defect tracking and defect management.
7. Collaboration: Collaborate with development teams, QA teams, and product owners to ensure automated tests are aligned with business requirements and application functionality.
8. Test Automation Framework Maintenance: Maintain and update test automation frameworks to ensure they remain relevant and effective.
Requirements:
1. Python Programming: Strong proficiency in Python programming, including Python 3.x, NumPy, and Pandas.
2. Test Automation Frameworks: Experience with test automation frameworks such as Pytest, Unittest, Behave, and Selenium.
3. Automated Testing: 5+ years of experience with automated testing, including test-driven development, behavior-driven development, and automated testing frameworks.
4. Software Testing: 5+ years of experience with software testing, including manual testing, automated testing, and test planning.
5. Agile Methodologies: Experience with Agile development methodologies, including Scrum and Kanban.
6. Version Control: Experience with version control systems such as Git, SVN, and Mercurial.
7. Cloud Experience: Experience with cloud-based infrastructure, including AWS, GCP, and Azure.
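For the Pytest-centric automation work described above, a minimal pytest-style example (the function under test, normalize_email, is hypothetical; plain asserts are used, so it also runs without pytest installed):

```python
# Function under test (hypothetical): normalize an email address by
# trimming whitespace and lowercasing only the domain part.
def normalize_email(raw: str) -> str:
    local, _, domain = raw.strip().partition("@")
    return f"{local}@{domain.lower()}"

# Pytest discovers test_* functions automatically; each assert is a check.
def test_normalize_email_lowercases_domain():
    assert normalize_email("  User@EXAMPLE.COM ") == "User@example.com"

def test_normalize_email_keeps_local_part_case():
    assert normalize_email("MixedCase@Mail.org") == "MixedCase@mail.org"

if __name__ == "__main__":
    test_normalize_email_lowercases_domain()
    test_normalize_email_keeps_local_part_case()
    print("all tests passed")
```

Under pytest the same file would be run with `pytest test_email.py`, with failed asserts reported with introspected values.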
Dear Candidate,
We are urgently hiring QA Automation Engineers and Test Leads at Hyderabad and Bangalore.
Exp: 6-10 yrs
Locations: Hyderabad, Bangalore
JD:
We are hiring Automation Testers with 6-10 years of automation testing experience using QA automation tools such as Java, UFT, Selenium, API testing, ETL, and others.
Must Haves:
· Experience in Financial Domain is a must
· Extensive hands-on experience designing, implementing, and maintaining automation frameworks using Java, UFT, ETL, and Selenium tools and automation concepts.
· Experience with AWS concepts and framework design/testing.
· Experience in Data Analysis, Data Validation, Data Cleansing, Data Verification and identifying data mismatch.
· Experience with Databricks, Python, Spark, Hive, Airflow, etc.
· Experience in validating and analyzing Kubernetes log files.
· API testing experience
· Backend testing skills with ability to write SQL queries in Databricks and in Oracle databases
· Experience in working with globally distributed Agile project teams
· Ability to work in a fast-paced, globally structured and team-based environment, as well as independently
· Experience in test management tools like Jira
· Good written and verbal communication skills
Good To have:
- Business and finance knowledge desirable
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Worldwide Locations: USA | HK | IN
6+ years of experience with deployment and management of Kubernetes clusters in production environment as DevOps engineer.
• Expertise in Kubernetes fundamentals like nodes, pods, services, deployments etc., and their interactions with the underlying infrastructure.
• Hands-on experience with containerization technologies such as Docker or RKT to package applications for use in a distributed system managed by Kubernetes.
• Knowledge of software development cycle including coding best practices such as CI/CD pipelines and version control systems for managing code changes within a team environment.
• Deep understanding of cloud computing and the operations processes needed when setting up workloads on these platforms.
• Experience with Agile software development and knowledge of best practices for agile Scrum teams.
• Proficient with Git version control.
• Experience working with Linux and cloud compute platforms.
• Excellent problem-solving skills and ability to troubleshoot complex issues in distributed systems.
• Excellent communication & interpersonal skills, effective problem-solving skills and logical thinking ability and strong commitment to professional and client service excellence.


Responsibilities
- Develop and maintain robust APIs to support various applications and services.
- Design and implement scalable solutions using AWS cloud services.
- Utilize Python frameworks such as Flask and Django to build efficient and high-performance applications.
- Collaborate with cross-functional teams to gather and analyze requirements for new features and enhancements.
- Ensure the security and integrity of applications by implementing best practices and security measures.
- Optimize application performance and troubleshoot issues to ensure smooth operation.
- Provide technical guidance and mentorship to junior team members.
- Conduct code reviews to ensure adherence to coding standards and best practices.
- Participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.
- Develop and maintain documentation for code, processes, and procedures.
- Stay updated with the latest industry trends and technologies to continuously improve skills and knowledge.
- Contribute to the overall success of the company by delivering high-quality software solutions that meet business needs.
- Foster a collaborative and inclusive work environment that promotes innovation and continuous improvement.
Qualifications
- Possess strong expertise in developing and maintaining APIs.
- Demonstrate proficiency in AWS cloud services and their application in scalable solutions.
- Have extensive experience with Python frameworks such as Flask and Django.
- Exhibit strong analytical and problem-solving skills to address complex technical challenges.
- Show ability to collaborate effectively with cross-functional teams and stakeholders.
- Display excellent communication skills to convey technical concepts clearly.
- A background in the Consumer Lending domain is a plus.
- Demonstrate commitment to continuous learning and staying updated with industry trends.
- Possess a strong understanding of agile development methodologies.
- Show experience in mentoring and guiding junior team members.
- Exhibit attention to detail and a commitment to delivering high-quality software solutions.
- Demonstrate ability to work effectively in a hybrid work model.
- Show a proactive approach to identifying and addressing potential issues before they become problems.
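As a sketch of the API work described above: a minimal JSON health endpoint using only the standard library (in Flask or Django this would be a short route handler; http.server is used here so the example runs with no dependencies):

```python
# Minimal JSON API: GET /health returns {"status": "ok"}.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep output quiet

def start_server() -> int:
    """Serve on an ephemeral port in a daemon thread; return the port."""
    srv = HTTPServer(("127.0.0.1", 0), HealthHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv.server_address[1]

if __name__ == "__main__":
    port = start_server()
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
        print(json.load(resp))  # {'status': 'ok'}
```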


Location: Hyderabad, Hybrid
We seek an experienced Lead Data Scientist to support our growing team of talented engineers in the Insurtech space. As a proven leader, you will guide, mentor, and drive the team to build, deploy, and scale machine learning models that deliver impactful business solutions. This role is crucial for ensuring the team provides real-world, scalable results. In this position, you will play a key role, not only by developing models but also by leading cross-functional teams, structuring and understanding data, and providing actionable insights to the business.
What We’re Looking For
● Proven experience as a Lead Machine Learning Engineer, Data Scientist, or a similar role, with extensive leadership experience managing and mentoring engineering teams.
● Bachelor’s or Master’s degree in Computer Science, Data Science, Mathematics, or a related field.
● Demonstrated success in leading teams that have designed, deployed, and scaled machine learning models in production environments, delivering tangible business results.
● Expertise in machine learning algorithms, deep learning, and data mining techniques, ideally in an enterprise setting.
● Hands-on experience with natural language processing (NLP) and data extraction tools.
● Proficiency in Python (or other programming languages), and with libraries such as Scikit-learn, TensorFlow, PyTorch, etc.
● Strong experience with MLOps processes, including model development, deployment, and monitoring at scale.
● Strong leadership and analytical skills, capable of structuring and interpreting data to provide meaningful business insights and guide decisions.
● A practical understanding of aligning machine learning initiatives with business objectives.
● Industry experience in insurance is a plus but not required.
What You’ll Be Doing
● Lead and mentor a team of strong Machine Learning Engineers, synthesizing their skills and ideas into scalable, impactful solutions.
● Have a proven track record in successfully leading teams in the development, deployment, and scaling of machine learning models to solve real-world business problems.
● Provide leadership in structuring, analyzing, and interpreting data, delivering insights to the business side to support informed decision-making.
● Oversee the full lifecycle of machine learning models – from development and training to deployment and monitoring in production environments.
● Collaborate closely with cross-functional teams to align AI efforts with business goals
and ensure measurable impacts.
● Leverage deep expertise in machine learning, data science, or related fields to enhance document interpretation tasks such as text extraction, classification, and semantic analysis.
● Utilize natural language processing (NLP) to improve our AI-based Intelligent Document Processing platform.
● Implement MLOps best practices, ensuring smooth deployment, monitoring, and maintenance of models in production.
● Continuously explore and experiment with new technologies and techniques to push the boundaries of AI solutions within the organization.
● Lead by example, fostering a culture of innovation and collaboration within the team while effectively communicating technical insights to both technical and non-technical stakeholders.
Candidates not located in Hyderabad will not be taken into consideration.


Job Summary:
Combine your passion for technology, healthcare, and communications as an HCS Web/React UI Software Engineer. You will develop and improve our software workflow control systems through the full software development cycle, from design to maintenance, while improving hospital communication and performance. This role will work on re-engineering existing applications into the SaaS model using state-of-the-art technologies.
Primarily responsible for technical leadership, support, and guidance to local engineering teams, ensuring that projects are completed safely, efficiently, and within budget as well as overseeing the planned maintenance programs of facilities.
Key Responsibilities:
• Oversee and coordinate all facilities management and project activities, including maintenance, renovation, and remodeling projects.
• Lead and manage facility-related projects, such as renovations, expansions, or relocations. Develop project plans, allocate resources, set timelines, and monitor progress to ensure successful completion within budget and schedule.
• Oversee relationships with external vendors, contractors, and service providers. Negotiate contracts, monitor performance, and ensure compliance with agreed-upon terms and service level agreements.
• Identify and implement best practices in facilities management. Ensure facilities comply with all relevant codes, regulations, and health and safety standards. Stay updated on changes in regulations and implement necessary changes to maintain compliance.
• Oversee facilities planned maintenance programs.
• Monitor project progress, identify risks, and implement mitigation strategies.
• Lead and oversee engineering projects within the Dubai & Northern Emirates region.
• Prepare financial reports and forecasts for management review.
• Identify and mitigate project risks and issues.
• Ensure compliance with environmental, health, and safety regulations.
• Generate progress reports, status updates, and other project-related documentation as required.
• Ensure that all project deliverables are of the highest quality, and that they meet the requirements of the project scope.
Qualifications:
• Bachelor's Degree in engineering or related fields.
• Engineering qualifications in Electrical or HVAC disciplines will be an advantage.
• 5 - 10 years of experience in Engineering in Healthcare setup or a similar field.
• Strong analytical and problem-solving skills.
• Must have hands-on experience in hospital systems.
• Ability to think critically and maintain a high level of confidentiality.
• Must be willing to participate in crisis management at facility level.
• Strong interpersonal, verbal, and written communication skills.
• Familiarity with project management software.


We are seeking an experienced Senior Golang Developer to join our dynamic engineering team at our Hyderabad office (Hybrid option available).
What You'll Do:
- Collaborate with a team of engineers to design, develop, and support web and mobile applications using Golang.
- Work in a fast-paced agile environment, delivering high-quality solutions focused on continuous innovation.
- Tackle complex technical challenges with creativity and out-of-the-box thinking.
- Take ownership of critical components and gradually assume responsibility for significant portions of the product.
- Develop robust, scalable, and performant backend systems using Golang.
- Contribute to all phases of the development lifecycle, including design, coding, testing, and deployment.
- Build and maintain SQL and NoSQL databases to support application functionality.
- Document your work and collaborate effectively with cross-functional teams, including QA, engineering, and business units.
- Work with global teams to architect solutions, provide estimates, reduce complexity, and deliver a world-class platform.
Who Should Apply:
- 5+ years of experience in backend development with a strong focus on Golang.
- Proficient in building and deploying RESTful APIs and microservices.
- Experience with SQL and NoSQL databases (e.g., MySQL, MongoDB).
- Familiarity with cloud platforms such as AWS and strong Linux skills.
- Hands-on experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of system design principles, scalability, and high availability.
- Exposure to frontend technologies like React or mobile development is a plus.
- Experience working in an Agile/Scrum environment.



- Data Scientist with 4+ years of experience
- Good working experience in computer vision and ML engineering
- Strong knowledge of statistical modeling, hypothesis testing, and regression analysis
- Experience developing APIs
- Proficiency in Python and SQL
- Working knowledge of Azure
- Basic knowledge of NLP
- Analytical thinking and problem-solving abilities
- Excellent communication and strong collaboration skills
- Ability to work independently
- Attention to detail and commitment to high-quality results
- Adaptability to fast-paced, dynamic environments
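As a toy illustration of the regression-analysis requirement, a single-predictor least-squares fit can be sketched in plain Python (data and function name are invented for the example; real work would use the project's datasets and libraries such as statsmodels):

```python
# Ordinary least-squares fit for one predictor: y ~ slope * x + intercept.
# Closed-form OLS: slope = cov(x, y) / var(x); toy data only.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]          # exactly y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept)        # 2.0 1.0
```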


Job Description
We are looking for a talented Java Developer for opportunities abroad. You will be responsible for developing high-quality software solutions, working on both server-side components and integrations, and ensuring optimal performance and scalability.
Preferred Qualifications
- Experience with microservices architecture.
- Knowledge of cloud platforms (AWS, Azure).
- Familiarity with Agile/Scrum methodologies.
- Understanding of front-end technologies (HTML, CSS, JavaScript) is a plus.
Requirement Details
Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Proven experience as a Java Developer or similar role.
Strong knowledge of Java programming language and its frameworks (Spring, Hibernate).
Experience with relational databases (e.g., MySQL, PostgreSQL) and ORM tools.
Familiarity with RESTful APIs and web services.
Understanding of version control systems (e.g., Git).
Solid understanding of object-oriented programming (OOP) principles.
Strong problem-solving skills and attention to detail.


Key Responsibilities:
● Work closely with product managers, designers, frontend developers, and other cross-functional teams to ensure the seamless integration and alignment of frontend and backend technologies, driving cohesive and high-quality product delivery.
● Develop and implement coding standards and best practices for the backend team.
● Document technical specifications and procedures.
● Stay up-to-date with the latest backend technologies, trends, and best practices.
● Collaborate with other departments to identify and address backend-related issues.
● Conduct code reviews and ensure code quality and consistency across the backend team.
● Create technical documentation, ensuring clarity for future development and maintenance.
Requirements:
● Experience: 4-6 years of hands-on experience in backend development, with a strong background in product-based companies or startups.
● Education: Bachelor’s degree or above in Computer Science or a related field.
● Programming skills: Proficient in Python and software development principles, with a focus on clean, maintainable code and industry best practices. Experienced in unit testing, AI-driven code reviews, version control with Git, CI/CD pipelines using GitHub Actions, and integrating New Relic for logging and APM into backend systems.
● Database Development: Proficiency in developing and optimizing backend systems in both relational and non-relational database environments, such as MySQL and NoSQL databases.
● GraphQL: Proven experience in developing and managing robust GraphQL APIs, preferably using Apollo Server. Ability to design type-safe GraphQL schemas and resolvers, ensuring seamless integration and high performance.
● Cloud Platforms: Familiar with AWS and experienced in Docker containerization and orchestrating containerized systems.
● System Architecture: Proficient in system design and architecture, with experience in developing multi-tenant platforms, including security implementation, user onboarding, payment integration, and scalable architecture.
● Linux Systems: Familiarity with Linux systems is mandatory, including deployment and management.
● Continuous Learning: Stay current with industry trends and emerging technologies to influence architectural decisions and drive continuous improvement.
Benefits:
● Competitive salary.
● Health insurance.
● Casual dress code.
● Dynamic, collaboration-friendly office.
● Hybrid work schedule.
Industry
- IT Services and IT Consulting
Employment Type
Full-time
The Sr. Analytics Engineer provides technical expertise in needs identification, data modeling, data movement, and transformation mapping (source to target), as well as automation and testing strategies, translating business needs into technical solutions while adhering to established data guidelines and approaches from a business-unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities :
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements :
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization/tuning/resource allocations
- Excellent understanding of in-memory distributed computing frameworks like Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., Redshift, BigQuery, Cassandra, etc.).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob storage, AWS S3, Google Cloud storage, etc.
- Have a deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus


Hi,
I am an HR representative from Janapriya School, Miyapur, Hyderabad, Telangana.
We are currently looking for a primary computer teacher.
The teacher should have at least 2 years of experience in teaching computers.
Interested candidates can apply to the above posting.



Mandatory Skills
- C/C++ Programming
- Linux System concepts
- Good Written and verbal communication skills
- Good problem-solving skills
- Python scripting experience
- Prior experience in Continuous Integration and Build System is a plus
- SCM tools like Git, Perforce, etc. are a plus
- Repo, Git and Gerrit tools
- Android Build system expertise
- Automation development experience with tools like Electric Commander, Jenkins, or Hudson
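As a representative (entirely hypothetical) example of the Python scripting this role involves: a small helper that scans a build log for failures so a CI job (Jenkins, etc.) can decide whether to break the build. The failure markers are illustrative; real build systems define their own.

```python
# Minimal CI helper: scan build-log text and collect lines that indicate failure.

FAILURE_MARKERS = ("error:", "FAILED", "fatal:")

def find_failures(log_text):
    """Return the lines of a build log that match a known failure marker."""
    return [line for line in log_text.splitlines()
            if any(marker in line for marker in FAILURE_MARKERS)]

sample_log = """\
compiling module a ... ok
linker error: undefined symbol foo
tests: 41 passed, 1 FAILED
done
"""
failures = find_failures(sample_log)
print(len(failures))   # 2
```

In a real pipeline the script would exit nonzero when `failures` is non-empty so the build step is marked failed.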

The Technical Lead will oversee all aspects of application development at TinyPal. This position involves both managing the development team and actively contributing to the coding and architecture, particularly in the backend development using Python. The ideal candidate will bring a strategic perspective to the development process, ensuring that our solutions are robust, scalable, and aligned with our business goals.
Key Responsibilities:
- Lead and manage the application development team across all areas, including backend, frontend, and mobile app development.
- Hands-on development and oversight of backend systems using Python, ensuring high performance, scalability, and integration with frontend services.
- Architect and design innovative solutions that meet market needs and are aligned with the company’s technology strategy, with a strong focus on embedding AI technologies to enhance app functionalities.
- Coordinate with product managers and other stakeholders to translate business needs into technical strategies, particularly in leveraging AI to solve complex problems and improve user experiences.
- Maintain high standards of software quality by establishing good practices and habits within the development team.
- Evaluate and incorporate new technologies and tools to improve application development processes, with a particular emphasis on AI and machine learning technologies.
- Mentor and support team members to foster a collaborative and productive environment.
- Lead the deployment and continuous integration of applications across various platforms, ensuring AI components are well integrated and perform optimally.
Required Skills and Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
- Minimum of 7 years of experience in software development, with at least 1 year in a leadership role.
- Expert proficiency in Python and experience with frameworks like Django or Flask.
- Broad experience in full lifecycle development of large-scale applications.
- Strong architectural understanding of both frontend and backend technologies, with a specific capability in integrating AI into complex systems.
- Experience with cloud platforms (AWS, Azure, Google Cloud), and understanding of DevOps and CI/CD processes.
- Demonstrated ability to think strategically about business, product, and technical challenges, including the adoption and implementation of AI solutions.
- Excellent team management, communication, and interpersonal skills.


Description:
We are looking for a highly motivated Full Stack Backend Software Intern to join our team. The ideal candidate should have a strong interest in AI, LLM (Large Language Models), and related technologies, along with the ability to work independently and complete tasks with minimal supervision.
Responsibilities:
- Research and gather requirements for backend software projects.
- Develop, test, and maintain backend components of web applications.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic.
- Optimize applications for maximum speed and scalability.
- Implement security and data protection measures.
- Stay up-to-date with emerging technologies and industry trends.
- Complete tasks with minimal hand-holding and supervision.
- Assist with frontend tasks using JavaScript and React if required.
Requirements:
- Proficiency in backend development languages such as Python or Node.js
- Familiarity with frontend technologies like HTML, CSS, JavaScript, and React.
- Experience with relational and non-relational databases.
- Understanding of RESTful APIs and microservices architecture.
- Knowledge of AI, LLM, and related technologies is a plus.
- Ability to work independently and complete tasks with minimal supervision.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
- Currently pursuing or recently completed a degree in Computer Science or related field.
Benefits:
- Opportunity to work on cutting-edge technologies in AI and LLM.
- Hands-on experience in developing backend systems for web applications.
- Mentorship from experienced developers and engineers.
- Flexible working hours and a supportive work environment.
- Possibility of a full-time position based on performance.
If you are passionate about backend development, AI, and LLM, and are eager to learn and grow in a dynamic environment, we would love to hear from you. Apply now to join our team as a Full Stack Backend Software Intern.

We are actively seeking a self-motivated Data Engineer with expertise in Azure cloud and Databricks, with a thorough understanding of Delta Lake and Lake-house Architecture. The ideal candidate should excel in developing scalable data solutions, crafting platform tools, and integrating systems, while demonstrating proficiency in cloud-native database solutions and distributed data processing.
Key Responsibilities:
- Contribute to the development and upkeep of a scalable data platform, incorporating tools and frameworks that leverage Azure and Databricks capabilities.
- Exhibit proficiency in various RDBMS databases such as MySQL and SQL-Server, emphasizing their integration in applications and pipeline development.
- Design and maintain high-caliber code, including data pipelines and applications, utilizing Python, Scala, and PHP.
- Implement effective data processing solutions via Apache Spark, optimizing Spark applications for large-scale data handling.
- Optimize data storage using formats like Parquet and Delta Lake to ensure efficient data accessibility and reliable performance.
- Demonstrate understanding of Hive Metastore, Unity Catalog Metastore, and the operational dynamics of external tables.
- Collaborate with diverse teams to convert business requirements into precise technical specifications.
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or a related discipline.
- Demonstrated hands-on experience with Azure cloud services and Databricks.
- Proficient programming skills in Python, Scala, and PHP.
- In-depth knowledge of SQL, NoSQL databases, and data warehousing principles.
- Familiarity with distributed data processing and external table management.
- Insight into enterprise data solutions for PIM, CDP, MDM, and ERP applications.
- Exceptional problem-solving acumen and meticulous attention to detail.
Additional Qualifications :
- Acquaintance with data security and privacy standards.
- Experience in CI/CD pipelines and version control systems, notably Git.
- Familiarity with Agile methodologies and DevOps practices.
- Competence in technical writing for comprehensive documentation.
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions.
- Competence in building data warehouses and loading data into them.
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements.
- Implement AWS Lambda for event-based tasks.
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
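The event-based Lambda responsibility above can be sketched as a bare handler skeleton (the S3 event shape follows AWS's documented format; the processing step is a placeholder, and the bucket/key names are invented for the local smoke test):

```python
import json

def lambda_handler(event, context):
    """Hypothetical entry point for an S3-triggered ETL job.

    Extracts the bucket/key from the S3 event; real code would hand them
    to the ETL (e.g., kick off a Step Functions execution) here.
    """
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return {"statusCode": 200,
            "body": json.dumps({"processed": f"s3://{bucket}/{key}"})}

# Local smoke test with a minimal fake S3 event:
fake_event = {"Records": [{"s3": {"bucket": {"name": "raw-data"},
                                  "object": {"key": "2024/01/sales.csv"}}}]}
result = lambda_handler(fake_event, None)
print(result["statusCode"])   # 200
```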


Responsibilities
· Develop Python-based APIs using FastAPI and Flask frameworks.
· Develop Python-based Automation scripts and Libraries.
· Develop Front End Components using VueJS and ReactJS.
· Write and modify Dockerfiles for the back-end and front-end components.
· Integrate CI/CD pipelines for automation and code-quality checks.
· Write complex ORM mappings using SQLAlchemy.
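To illustrate the automation-library responsibility above, here is a minimal, hypothetical retry helper of the kind such a library might provide (the decorator name and retry policy are illustrative, not from this posting):

```python
import functools
import time

def retry(times=3, delay=0.0, exceptions=(Exception,)):
    """Retry a flaky operation up to `times` attempts, re-raising on final failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, times + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == times:
                        raise
                    time.sleep(delay)  # back off before the next attempt
        return wrapper
    return decorator

calls = {"n": 0}

@retry(times=3)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(flaky())   # ok (succeeds on the third attempt)
```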
Required Skills:
· Strong experience in Python development in a full stack environment is a requirement, including NodeJS, VueJS/Vuex, Flask, etc.
· Experience with SQLAlchemy or similar ORM frameworks.
· Experience working with Geolocation APIs (e.g., Google Maps, Mapbox).
· Experience using Elasticsearch and Airflow is a plus.
· Strong knowledge of SQL, comfortable working with MySQL and/or PostgreSQL databases.
· Understand concepts of Data Modeling.
· Experience with REST.
· Experience with Git, GitFlow, and code review process.
· Good understanding of basic UI and UX principles.
· Possess excellent problem-solving and communication skills.


Position Overview:
We are searching for an experienced Senior MERN Stack Developer to lead our development efforts. Your expertise will drive the creation of cutting-edge web applications while mentoring junior developers and contributing to technical strategy.
Key Responsibilities:
• Lead and participate in the architecture, design, and development of complex applications.
• Mentor and guide junior developers, fostering skill development and growth.
• Collaborate with cross-functional teams to define technical roadmaps and strategies.
• Conduct code reviews and ensure adherence to coding standards and best practices.
• Stay updated with emerging technologies and advocate for their integration.
• Develop and maintain robust and scalable web applications using the MERN stack.
• Collaborate with front-end and back-end developers to define and implement innovative solutions.
• Design and implement RESTful APIs for seamless integration between front-end and back-end systems.
• Work closely with UI/UX designers to create responsive and visually appealing user interfaces.
• Troubleshoot, debug, and optimize code to ensure high performance and reliability.
• Implement security and data protection measures in line with industry best practices.
• Stay updated on emerging trends and technologies in web development.
Qualifications & Skills:
• Bachelor’s or Master’s degree in Computer Science or related field.
• Proven experience as a Senior MERN Stack Developer.
• Strong proficiency in React.js, Node.js, Express.js, and MongoDB.
• Strong proficiency in Typescript, JavaScript, HTML, and CSS.
• Familiarity with front-end frameworks like Bootstrap, Material-UI, etc.
• Experience with version control systems, such as Git.
• Knowledge of database design and management, including both SQL and NoSQL databases.
• Leadership skills and the ability to guide and inspire a team.
• Excellent problem-solving abilities and a strategic mindset.
• Effective communication and collaboration skills.
• Knowledge of AWS and S3 cloud storage.
Location: The position is based in Hyderabad.
Join us in revolutionizing education! If you are a passionate MERN Stack developer with a vision for transforming education in line with NEP2020, we would love to hear from you. Apply now and be part of an innovative team shaping the future of education.
Publicis Sapient Overview:
As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As Senior Associate L2 in Data Engineering, you will translate client requirements into technical design and implement components for the data engineering solution. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration, and wrangling, computation, and analytics pipelines, and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 5+ years of IT experience with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in Big Data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines
4. Strong experience in at least one of the programming languages Java, Scala, or Python (Java preferred)
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed working knowledge of data-platform-related services on at least one cloud platform, IAM, and data security
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes

Role: Python-Django Developer
Location: Noida, India
Description:
- Develop web applications using Python and Django.
- Write clean and maintainable code following best practices and coding standards.
- Collaborate with other developers and stakeholders to design and implement new features.
- Participate in code reviews and maintain code quality.
- Troubleshoot and debug issues as they arise.
- Optimize applications for maximum speed and scalability.
- Stay up-to-date with emerging trends and technologies in web development.
Requirements:
- Bachelor's or Master's degree in Computer Science, Computer Engineering or a related field.
- 4+ years of experience in web development using Python and Django.
- Strong knowledge of object-oriented programming and design patterns.
- Experience with front-end technologies such as HTML, CSS, and JavaScript.
- Understanding of RESTful web services.
- Familiarity with database technologies such as PostgreSQL or MySQL.
- Experience with version control systems such as Git.
- Ability to work in a team environment and communicate effectively with team members.
- Strong problem-solving and analytical skills.
Publicis Sapient Overview:
As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration, and wrangling, computation, and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferred.
Role & Responsibilities:
Job Title: Senior Associate L1 – Data Engineering
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 3.5+ years of IT experience with 1.5+ years in data-related technologies
2. Minimum 1.5 years of experience in Big Data technologies
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
4. Strong experience in at least one of the programming languages Java, Scala, or Python (Java preferred)
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Working knowledge of data-platform-related services on at least one cloud platform, IAM, and data security
7. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Daily and monthly responsibilities
- Review and coordinate with business application teams on data delivery requirements.
- Develop estimation and proposed delivery schedules in coordination with development team.
- Develop sourcing and data delivery designs.
- Review data model, metadata and delivery criteria for solution.
- Review and coordinate with team on test criteria and performance of testing.
- Contribute to the design, development and completion of project deliverables.
- Complete in-depth data analysis and contribute to strategic efforts.
- Develop a complete understanding of how we manage data, with a focus on improving how data is sourced and managed across multiple business areas.
Basic Qualifications
- Bachelor’s degree.
- 5+ years of data analysis working with business data initiatives.
- Knowledge of Structured Query Language (SQL) and use in data access and analysis.
- Proficient in data management including data analytical capability.
- Excellent verbal and written communication skills and high attention to detail.
- Experience with Python.
- Presentation skills in demonstrating system design and data analysis solutions.
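The SQL-plus-Python requirement above can be illustrated with the standard library's sqlite3 module (the table and figures are toy data; a real engagement would target the warehouse's own SQL dialect and access layer):

```python
import sqlite3

# In-memory toy table standing in for a business data set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("south", 120.0), ("north", 80.0), ("south", 40.0)])

# Typical analysis query: aggregate by a business dimension.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)   # [('north', 80.0), ('south', 160.0)]
conn.close()
```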
Role: Oracle DBA Developer
Location: Hyderabad
Required Experience: 8 + Years
Skills : DBA, Terraform, Ansible, Python, Shell Script, DevOps activities, Oracle DBA, SQL server, Cassandra, Oracle sql/plsql, MySQL/Oracle/MSSql/Mongo/Cassandra, Security measure configuration
Roles and Responsibilities:
1. 8+ years of hands-on DBA experience in one or many of the following: SQL Server, Oracle, Cassandra
2. DBA experience in a SRE environment will be an advantage.
3. Experience in automation and building databases by providing self-service tools; analyze and implement solutions for database administration (e.g., backups, performance tuning, troubleshooting, capacity planning).
4. Analyze solutions and implement best practices for cloud database and their components.
5. Build and enhance tooling, automation, and CI/CD workflows (Jenkins, etc.) that provide safe self-service capabilities to the engineering teams.
6. Implement proactive monitoring and alerting to detect issues before they impact users. Use a metrics-driven approach to identify and root-cause performance and scalability bottlenecks in the system.
7. Work on automation of database infrastructure and help engineering succeed by providing self-service tools.
8. Write database documentation, including data standards, procedures, and definitions for the data dictionary (metadata)
9. Monitor database performance, control access permissions and privileges, capacity planning, implement changes and apply new patches and versions when required.
10. Recommend query and schema changes to optimize the performance of database queries.
11. Have experience with cloud-based environments (OCI, AWS, Azure) as well as On-Premises.
12. Have experience with cloud database such as SQL server, Oracle, Cassandra
13. Have experience with infrastructure automation and configuration management (Jira, Confluence, Ansible, Gitlab, Terraform)
14. Have excellent written and verbal English communication skills.
15. Plan, manage, and scale data stores to ensure a business's complex data requirements are met and its data can be accessed in a fast, reliable, and safe manner.
16. Ensure the quality of orchestration and integration of tools needed to support daily operations by patching together existing infrastructure with cloud solutions and additional data infrastructures.
17. Ensure data security by rigorously testing backup and recovery processes and frequently auditing well-regulated security procedures.
18. Use software and tooling to automate manual tasks and enable engineers to move fast without the concern of losing data during their experiments.
19. Define service level objectives (SLOs) and perform risk analysis to determine which problems to address and which to automate.
20. Bachelor's Degree in a technical discipline required.
21. DBA Certifications required: Oracle, SQLServer, Cassandra (2 or more)
22. Cloud and DevOps certifications will be an advantage.
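The self-service backup automation described in items 3, 7, and 18 above might look like the minimal sketch below. It uses Python's built-in sqlite3 online-backup API as a stand-in for a production engine (Oracle, SQL Server, Cassandra, etc.); the paths and file-naming scheme are hypothetical.

```python
import sqlite3
from datetime import datetime, timezone
from pathlib import Path

def backup_database(src_path: str, backup_dir: str) -> Path:
    """Take a consistent online backup and return the new file's path."""
    Path(backup_dir).mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = Path(backup_dir) / f"backup_{stamp}.db"
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dest)
    with dst:
        src.backup(dst)  # sqlite3's online-backup API: copies pages safely
    src.close()
    dst.close()
    return dest
```

In a real self-service tool this would be wrapped behind an API or CLI, with retention, alerting, and restore-testing layered on top.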
Must have Skills:
- Oracle DBA with development experience
- SQL
- DevOps tools
- Cassandra

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience.
- 5+ years of experience in a DevOps role, preferably for a SaaS or software company.
- Expertise in cloud computing platforms (e.g., AWS, Azure, GCP).
- Proficiency in scripting languages (e.g., Python, Bash, Ruby).
- Extensive experience with CI/CD tools (e.g., Jenkins, GitLab CI, Travis CI).
- Extensive experience with NGINX and similar web servers.
- Strong knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
- Ability to work on-call as needed and respond to emergencies in a timely manner.
- Experience with high-transaction e-commerce platforms.
Preferred Qualifications:
- Certifications in cloud computing or DevOps are a plus (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert).
- Experience in a high-availability, 24x7x365 environment.
- Strong collaboration, communication, and interpersonal skills.
- Ability to work independently and as part of a team.
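CI/CD pipelines like those listed above typically gate a rollout on a health check before shifting traffic. A minimal, dependency-free sketch in the document's own scripting language, Python (the probe callable stands in for a real readiness check, e.g. an HTTP GET against a service's health endpoint; retry counts and delays are placeholders):

```python
import time
from typing import Callable

def wait_until_healthy(probe: Callable[[], bool],
                       retries: int = 5,
                       delay: float = 2.0) -> bool:
    """Poll a readiness probe until it passes or retries run out."""
    for attempt in range(retries):
        if probe():
            return True
        if attempt < retries - 1:
            time.sleep(delay * (attempt + 1))  # linear backoff between polls
    return False
```

A deploy step would call this after pushing a new version and roll back if it returns False.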

Job Description:-
Design and develop IoT/cloud-based TypeScript/JavaScript/Node.js applications using Amazon Web Services (AWS).
Work closely with onsite, offshore, and cross-functional teams (Product Management, UI/UX developers, web and mobile developers, SQA) to use technologies effectively and deliver high-quality IoT applications on time.
Resolve bugs and issues.
Proactively identify risks and failure modes early in the development lifecycle and develop POCs to mitigate those risks early in the program.
Assertive communication and team skills
Primary Skills:
Hands-on experience (3+ years) in an AWS cloud-native environment, including AWS Lambda, Kinesis, and DynamoDB
3+ years of experience working with Node.js, Python, unit testing, and Git
3+ years of work experience with document, relational, or time-series databases
2+ years of work experience with TypeScript
1+ years with IaC frameworks such as Serverless or CDK, with CloudFormation knowledge
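Tying together the Lambda and Kinesis experience above: a handler for a Kinesis-triggered Lambda might look like this Python sketch. Kinesis delivers each record's payload base64-encoded under `Records[*].kinesis.data`; the telemetry schema and alert threshold here are hypothetical.

```python
import base64
import json

def handler(event, context):
    """AWS Lambda entry point for a Kinesis stream trigger.

    Decodes each record's base64 payload, parses it as JSON telemetry,
    and reports how many readings exceeded a (hypothetical) threshold.
    """
    records = event.get("Records", [])
    alerts = 0
    for record in records:
        raw = base64.b64decode(record["kinesis"]["data"])
        reading = json.loads(raw)
        if reading.get("temperature_c", 0) > 80:  # hypothetical alert threshold
            alerts += 1
    return {"processed": len(records), "alerts": alerts}
```

In production the alert branch would publish to SNS or write to DynamoDB rather than just counting; this sketch keeps the event-decoding shape visible.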