50+ Python Jobs in Hyderabad | Python Job openings in Hyderabad
Apply to 50+ Python Jobs in Hyderabad on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.
We seek a skilled AI Engineer with a background in AI research, machine learning, and data science to join our innovative team! This is an exciting opportunity for an individual with extensive AI experience eager to work on cutting-edge projects, turning research into practical AI applications. You will collaborate with cross-functional teams to design, develop, and implement AI solutions that enhance business operations and decision-making processes.
What We’re Looking For
● Education: Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field. A PhD in AI or a related discipline is highly desirable.
● Experience:
○ Proven experience in AI research and implementation, with a deep understanding of both theoretical and practical aspects of AI.
○ Strong proficiency in machine learning (ML), data science, and deep learning techniques.
○ Hands-on experience with Python and ML libraries such as TensorFlow, PyTorch, Scikit-learn, etc.
○ Experience with data preprocessing, feature engineering, and data visualization.
○ Familiarity with cloud platforms such as AWS, Azure, or Google Cloud for AI and ML deployment.
○ Strong analytical and problem-solving skills.
○ Ability to translate AI research into practical applications and solutions.
○ Knowledge of AI model evaluation techniques and performance optimization.
○ Strong communication skills for presenting research and technical details to non-technical stakeholders.
○ Ability to work independently and in team environments.
Preferred Qualifications
● Experience working with natural language processing (NLP), computer vision, or reinforcement learning.
What You’ll Be Doing
● Conduct advanced AI research to develop innovative solutions for business challenges.
● Design, develop, and train machine learning models, including supervised, unsupervised, and deep learning algorithms (see the sketch after this list).
● Collaborate with data scientists to analyze large datasets, identify patterns, and extract actionable insights.
● Work with software development teams to integrate AI models into production environments.
● Leverage state-of-the-art AI tools, frameworks, and libraries to accelerate AI development.
● Optimize AI models for accuracy, performance, and scalability in real-world applications.
● Collaborate closely with cross-functional teams, including product managers, software engineers, and data scientists, to implement AI-driven solutions.
● Document AI research outcomes, development processes, and performance metrics. Present findings to stakeholders in an easily understandable manner.
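For context, here is a minimal sketch of the train-and-evaluate workflow described above, using scikit-learn (one of the libraries the role lists). The dataset, model, and metrics are illustrative choices, not anything prescribed by the posting:

```python
# Illustrative only: train a classifier and report evaluation metrics.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, preds):.3f}")
print(f"f1 score: {f1_score(y_test, preds):.3f}")
```

Holding out a test split and reporting more than one metric is the simplest form of the model-evaluation work the role describes.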
Location: Hyderabad, Hybrid
We seek an experienced Lead Data Scientist to support our growing team of talented engineers in the Insurtech space. As a proven leader, you will guide, mentor, and drive the team to build, deploy, and scale machine learning models that deliver impactful business solutions. This role is crucial for ensuring the team provides real-world, scalable results. In this position, you will play a key role, not only by developing models but also by leading cross-functional teams, structuring and understanding data, and providing actionable insights to the business.
What We’re Looking For
● Proven experience as a Lead Machine Learning Engineer, Data Scientist, or a similar role, with extensive leadership experience managing and mentoring engineering teams.
● Bachelor’s or Master’s degree in Computer Science, Data Science, Mathematics, or a related field.
● Demonstrated success in leading teams that have designed, deployed, and scaled machine learning models in production environments, delivering tangible business results.
● Expertise in machine learning algorithms, deep learning, and data mining techniques, ideally in an enterprise setting.
● Hands-on experience with natural language processing (NLP) and data extraction tools.
● Proficiency in Python (or other programming languages), and with libraries such as Scikit-learn, TensorFlow, PyTorch, etc.
● Strong experience with MLOps processes, including model development, deployment, and monitoring at scale.
● Strong leadership and analytical skills, capable of structuring and interpreting data to provide meaningful business insights and guide decisions.
● A practical understanding of aligning machine learning initiatives with business objectives.
● Industry experience in insurance is a plus but not required.
What You’ll Be Doing
● Lead and mentor a team of strong Machine Learning Engineers, synthesizing their skills and ideas into scalable, impactful solutions.
● Bring a proven track record of successfully leading teams in the development, deployment, and scaling of machine learning models to solve real-world business problems.
● Provide leadership in structuring, analyzing, and interpreting data, delivering insights to the business side to support informed decision-making.
● Oversee the full lifecycle of machine learning models – from development and training to deployment and monitoring in production environments.
● Collaborate closely with cross-functional teams to align AI efforts with business goals and ensure measurable impacts.
● Leverage deep expertise in machine learning, data science, or related fields to enhance document interpretation tasks such as text extraction, classification, and semantic analysis.
● Utilize natural language processing (NLP) to improve our AI-based Intelligent Document Processing platform (see the sketch after this list).
● Implement MLOps best practices, ensuring smooth deployment, monitoring, and maintenance of models in production.
● Continuously explore and experiment with new technologies and techniques to push the boundaries of AI solutions within the organization.
● Lead by example, fostering a culture of innovation and collaboration within the team while effectively communicating technical insights to both technical and non-technical stakeholders.
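As a rough illustration of the document-classification work mentioned above (text extraction, classification, semantic analysis), here is a hypothetical TF-IDF pipeline in scikit-learn; the documents, labels, and model choice are placeholders:

```python
# Hypothetical insurance-document classifier; training data is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

docs = ["policy renewal notice for your vehicle", "claim form for water damage"]
labels = ["renewal", "claim"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word and bigram features
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(docs, labels)

print(clf.predict(["please process my damage claim"]))  # expected: ['claim']
```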
Candidates not located in Hyderabad will not be considered.
Location: Hyderabad (Work From Office)
TechnoIdentity is on the lookout for talented and driven individuals to join our IT team as Dev Trainees. If you’re passionate about programming, enthusiastic about learning, and ready to take on challenges to deliver exceptional solutions, this opportunity is for you.
We value team members who:
- Demonstrate curiosity and eagerness to adopt new technologies.
- Exhibit logical thinking and a keen problem-solving ability.
- Are proficient in at least one programming language and have hands-on experience with writing and debugging code.
- Communicate effectively, both verbally and in writing.
- Are passionate learners with a desire to create meaningful solutions that address real-world business problems.
- Have a solid understanding of user experience and design principles.
Responsibilities:
As a Trainee Software Engineer, you will:
- Work on building and maintaining both frontend and backend components of web applications.
- Collaborate with cross-functional teams to ensure seamless integration of application components.
- Write clean, efficient, and well-documented code, adhering to good coding practices.
- Develop unit tests and perform debugging to ensure quality and functionality.
- Stay up-to-date with emerging trends and technologies in software development.
Requirements:
- Proficiency in at least one programming language such as Python, Java, or JavaScript, along with working knowledge of HTML/CSS.
- Familiarity with frontend frameworks like ReactJS (or experience in TypeScript) is a significant advantage.
- Basic understanding of backend technologies (e.g., Node.js, Django, Flask) is a plus.
- Knowledge of the software development life cycle (SDLC) and web development workflows.
- Ability to analyze and break down real-world problems with an eye for creating practical solutions.
- Exposure to version control tools like Git is a bonus.
Preferred Skills:
- Hands-on experience with modern frontend libraries/frameworks (e.g., ReactJS, Angular).
- Understanding of REST APIs and how the frontend interacts with backend systems (see the sketch after this list).
- Familiarity with database systems (e.g., MySQL, MongoDB).
- Strong analytical skills coupled with a creative approach to problem-solving.
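To make the REST bullet above concrete, here is a minimal hypothetical Flask endpoint (Flask is one of the backend options named); the route and data are invented for illustration:

```python
# A tiny REST API a frontend (e.g., ReactJS) could fetch from.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for a real database table.
TASKS = [{"id": 1, "title": "Learn Git"}, {"id": 2, "title": "Write unit tests"}]

@app.route("/api/tasks", methods=["GET"])
def list_tasks():
    # The frontend receives this JSON and renders it in the UI.
    return jsonify(TASKS)

if __name__ == "__main__":
    app.run(debug=True)
```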
Why Join Us?
- Opportunity to work with cutting-edge technologies in a collaborative environment.
- Supportive culture that encourages continuous learning and growth.
- A chance to develop innovative solutions that make an impact.
If you’re a tech enthusiast ready to embark on a rewarding career, we’d love to hear from you! Apply today and take the first step toward becoming part of our dynamic team.
We are seeking an experienced Senior Golang Developer to join our dynamic engineering team at our Hyderabad office (Hybrid option available).
What You'll Do:
- Collaborate with a team of engineers to design, develop, and support web and mobile applications using Golang.
- Work in a fast-paced agile environment, delivering high-quality solutions focused on continuous innovation.
- Tackle complex technical challenges with creativity and out-of-the-box thinking.
- Take ownership of critical components and gradually assume responsibility for significant portions of the product.
- Develop robust, scalable, and performant backend systems using Golang.
- Contribute to all phases of the development lifecycle, including design, coding, testing, and deployment.
- Build and maintain SQL and NoSQL databases to support application functionality.
- Document your work and collaborate effectively with cross-functional teams, including QA, engineering, and business units.
- Work with global teams to architect solutions, provide estimates, reduce complexity, and deliver a world-class platform.
Who Should Apply:
- 5+ years of experience in backend development with a strong focus on Golang.
- Proficient in building and deploying RESTful APIs and microservices.
- Experience with SQL and NoSQL databases (e.g., MySQL, MongoDB).
- Familiarity with cloud platforms such as AWS and strong Linux skills.
- Hands-on experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of system design principles, scalability, and high availability.
- Exposure to frontend technologies like React or mobile development is a plus.
- Experience working in an Agile/Scrum environment.
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently face in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, experimentation, and automation
- Develop scalable ML pipelines
- Develop MLOps components in the machine learning development life cycle using a model repository (either of MLflow or Kubeflow Model Registry); see the sketch after this list
- Develop MLOps components in the machine learning development life cycle using machine learning services (either of Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant ML E2E PaaS/SaaS)
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
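As a hedged sketch of the model-repository item above, here is what basic experiment tracking and model registration look like with MLflow (MLflow 2.x-style API); the experiment name, parameters, and model are illustrative:

```python
# Minimal MLflow tracking + registry sketch.
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# SQLite-backed store so the model registry works locally.
mlflow.set_tracking_uri("sqlite:///mlflow.db")
mlflow.set_experiment("demo-experiment")

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X, y)
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering the model lets it be versioned and promoted from the registry.
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-model")
```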
Required Qualifications
- 3-5 years' experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or related technical degree, or equivalent
- Strong experience in System Integration, Application Development, or Data Warehouse projects across technologies used in the enterprise space
- Knowledge of MLOps, machine learning, and Docker
- Object-oriented languages (e.g., Python, PySpark, Java, C#, C++)
- CI/CD experience (e.g., Jenkins, GitHub Actions)
- Database programming using any flavor of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Building the machine learning production system (or MLOps) is the biggest challenge most large companies currently face in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, experimentation, and automation
- Develop scalable ML pipelines
- Develop MLOps components in the machine learning development life cycle using a model repository (either of MLflow or Kubeflow Model Registry)
- Develop MLOps components in the machine learning development life cycle using machine learning services (either of Kubeflow, DataRobot, Hopsworks, Dataiku, or any relevant ML E2E PaaS/SaaS)
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 5.5-9 years' experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or related technical degree, or equivalent
- Strong experience in System Integration, Application Development, or Data Warehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning, and Docker
- Object-oriented languages (e.g., Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production-ready ML pipelines
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem-solving, project management, communication, and creative-thinking skills
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents; see the sketch after this list). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
- Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face, and techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
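To illustrate the prompt-engineering item above, here is a small chain-of-thought (CoT) prompt builder in plain Python; the template is illustrative and the provider call is stubbed out, since the posting allows any of several LLM services:

```python
# Illustrative CoT prompt construction; no specific LLM SDK is assumed.
COT_TEMPLATE = """You are a careful analyst.

Question: {question}

Think step by step, then give the final answer on its own line,
prefixed with "Answer:".
"""

def build_cot_prompt(question: str) -> str:
    return COT_TEMPLATE.format(question=question)

prompt = build_cot_prompt(
    "A policy costs $120 per year, billed monthly. What is the monthly charge?"
)
print(prompt)
# response = llm_client.complete(prompt)  # hypothetical client call
```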
- Data Scientist with 4+ years of experience
- Good working experience in computer vision and ML engineering
- Strong knowledge of statistical modeling, hypothesis testing, and regression analysis (see the sketch after this list)
- Experience developing APIs
- Proficiency in Python and SQL
- Working knowledge of Azure
- Basic knowledge of NLP
- Analytical thinking and problem-solving abilities
- Excellent communication, Strong collaboration skills
- Should be able to work independently
- Attention to detail and commitment to high-quality results
- Adaptability to fast-paced, dynamic environments
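For illustration of the statistical-modeling item flagged above, a small sketch with SciPy and statsmodels (a two-sample t-test plus an OLS regression); the data is synthetic:

```python
# Synthetic example: hypothesis test and linear regression.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
group_a = rng.normal(10.0, 2.0, size=200)
group_b = rng.normal(10.5, 2.0, size=200)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

X = sm.add_constant(rng.normal(size=(200, 1)))   # intercept + one predictor
y = X @ np.array([1.0, 3.0]) + rng.normal(size=200)
print(sm.OLS(y, X).fit().params)                 # recovers roughly [1.0, 3.0]
```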
Job Description
We are looking for a talented Java Developer to work in locations abroad. You will be responsible for developing high-quality software solutions, working on both server-side components and integrations, and ensuring optimal performance and scalability.
Preferred Qualifications
- Experience with microservices architecture.
- Knowledge of cloud platforms (AWS, Azure).
- Familiarity with Agile/Scrum methodologies.
- Understanding of front-end technologies (HTML, CSS, JavaScript) is a plus.
Requirement Details
Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Proven experience as a Java Developer or similar role.
Strong knowledge of Java programming language and its frameworks (Spring, Hibernate).
Experience with relational databases (e.g., MySQL, PostgreSQL) and ORM tools.
Familiarity with RESTful APIs and web services.
Understanding of version control systems (e.g., Git).
Solid understanding of object-oriented programming (OOP) principles.
Strong problem-solving skills and attention to detail.
At Golden Eagle IT Technologies Pvt Ltd
Key Responsibilities:
● Work closely with product managers, designers, frontend developers, and other cross-functional teams to ensure the seamless integration and alignment of frontend and backend technologies, driving cohesive and high-quality product delivery.
● Develop and implement coding standards and best practices for the backend team.
● Document technical specifications and procedures.
● Stay up-to-date with the latest backend technologies, trends, and best practices.
● Collaborate with other departments to identify and address backend-related issues.
● Conduct code reviews and ensure code quality and consistency across the backend team.
● Create technical documentation, ensuring clarity for future development and maintenance.
Requirements:
● Experience: 4-6 years of hands-on experience in backend development, with a strong background in product-based companies or startups.
● Education: Bachelor’s degree or above in Computer Science or a related field.
● Programming skills: Proficient in Python and software development principles, with a focus on clean, maintainable code and industry best practices. Experienced in unit testing, AI-driven code reviews, version control with Git, CI/CD pipelines using GitHub Actions, and integrating New Relic for logging and APM into backend systems.
● Database Development: Proficiency in developing and optimizing backend systems in both relational and non-relational database environments, such as MySQL and NoSQL databases.
● GraphQL: Proven experience in developing and managing robust GraphQL APIs, preferably using Apollo Server. Ability to design type-safe GraphQL schemas and resolvers, ensuring seamless integration and high performance.
● Cloud Platforms: Familiar with AWS and experienced in Docker containerization and orchestrating containerized systems.
● System Architecture: Proficient in system design and architecture with experience in developing multi-tenant platforms, including security implementation, user onboarding, payment integration, and scalable architecture.
● Linux Systems: Familiarity with Linux systems is mandatory, including deployment and management.
● Continuous Learning: Stay current with industry trends and emerging technologies to influence architectural decisions and drive continuous improvement.
Benefits:
● Competitive salary.
● Health insurance.
● Casual dress code.
● Dynamic & collaboration-friendly office.
● Hybrid work schedule.
Industry
- IT Services and IT Consulting
Employment Type
Full-time
The Sr. Analytics Engineer would provide technical expertise in needs identification, data modeling, data movement, and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities :
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements :
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization, tuning, and resource allocation (see the sketch after this list).
- Excellent understanding of in-memory distributed computing frameworks like Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL/analytical stores (e.g., Redshift, BigQuery, Cassandra, etc.).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob storage, AWS S3, Google Cloud storage, etc.
- Have a deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus
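As a hedged sketch of the Spark tuning themes above (shuffle sizing, resource allocation, avoiding unnecessary shuffles), here is a small PySpark example; the configuration values and data are illustrative, not recommendations:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("tuning-demo")
    .config("spark.sql.shuffle.partitions", "200")  # right-size shuffle parallelism
    .config("spark.executor.memory", "4g")          # explicit resource allocation
    .getOrCreate()
)

orders = spark.createDataFrame(
    [(1, "books", 120.0), (2, "toys", 40.0), (1, "books", 60.0)],
    ["product_id", "category", "amount"],
)
products = spark.createDataFrame([(1, "novel"), (2, "puzzle")], ["product_id", "name"])

# Broadcasting the small dimension table turns a shuffle join into a map-side join.
joined = orders.join(F.broadcast(products), "product_id")
joined.groupBy("category").agg(F.sum("amount").alias("revenue")).show()
```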
Hi,
I am an HR representative from Janapriya School, Miyapur, Hyderabad, Telangana.
We are currently looking for a primary computer teacher.
The teacher should have at least 2 years of experience teaching computers.
Interested candidates can apply to the above posting.
Mandatory Skills
- C/C++ Programming
- Linux System concepts
- Good Written and verbal communication skills
- Good problem-solving skills
- Python scripting experience
- Prior experience in Continuous Integration and Build System is a plus
- Experience with SCM tools like Git, Perforce, etc. is a plus
- Repo, Git and Gerrit tools
- Android Build system expertise
- Automation development experience with tools like ElectricCommander, Jenkins, Hudson
The Technical Lead will oversee all aspects of application development at TinyPal. This position involves both managing the development team and actively contributing to the coding and architecture, particularly in the backend development using Python. The ideal candidate will bring a strategic perspective to the development process, ensuring that our solutions are robust, scalable, and aligned with our business goals.
Key Responsibilities:
- Lead and manage the application development team across all areas, including backend, frontend, and mobile app development.
- Hands-on development and oversight of backend systems using Python, ensuring high performance, scalability, and integration with frontend services.
- Architect and design innovative solutions that meet market needs and are aligned with the company’s technology strategy, with a strong focus on embedding AI technologies to enhance app functionalities.
- Coordinate with product managers and other stakeholders to translate business needs into technical strategies, particularly in leveraging AI to solve complex problems and improve user experiences.
- Maintain high standards of software quality by establishing good practices and habits within the development team.
- Evaluate and incorporate new technologies and tools to improve application development processes, with a particular emphasis on AI and machine learning technologies.
- Mentor and support team members to foster a collaborative and productive environment.
- Lead the deployment and continuous integration of applications across various platforms, ensuring AI components are well integrated and perform optimally.
Required Skills and Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
- Minimum of 7 years of experience in software development, with at least 1 year in a leadership role.
- Expert proficiency in Python and experience with frameworks like Django or Flask.
- Broad experience in full lifecycle development of large-scale applications.
- Strong architectural understanding of both frontend and backend technologies, with a specific capability in integrating AI into complex systems.
- Experience with cloud platforms (AWS, Azure, Google Cloud), and understanding of DevOps and CI/CD processes.
- Demonstrated ability to think strategically about business, product, and technical challenges, including the adoption and implementation of AI solutions.
- Excellent team management, communication, and interpersonal skills.
Description:
We are looking for a highly motivated Full Stack Backend Software Intern to join our team. The ideal candidate should have a strong interest in AI, LLM (Large Language Models), and related technologies, along with the ability to work independently and complete tasks with minimal supervision.
Responsibilities:
- Research and gather requirements for backend software projects.
- Develop, test, and maintain backend components of web applications.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic.
- Optimize applications for maximum speed and scalability.
- Implement security and data protection measures.
- Stay up-to-date with emerging technologies and industry trends.
- Complete tasks with minimal hand-holding and supervision.
- Assist with frontend tasks using JavaScript and React if required.
Requirements:
- Proficiency in backend development languages such as Python or Node.js
- Familiarity with frontend technologies like HTML, CSS, JavaScript, and React.
- Experience with relational and non-relational databases.
- Understanding of RESTful APIs and microservices architecture.
- Knowledge of AI, LLM, and related technologies is a plus.
- Ability to work independently and complete tasks with minimal supervision.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
- Currently pursuing or recently completed a degree in Computer Science or related field.
Benefits:
- Opportunity to work on cutting-edge technologies in AI and LLM.
- Hands-on experience in developing backend systems for web applications.
- Mentorship from experienced developers and engineers.
- Flexible working hours and a supportive work environment.
- Possibility of a full-time position based on performance.
If you are passionate about backend development, AI, and LLM, and are eager to learn and grow in a dynamic environment, we would love to hear from you. Apply now to join our team as a Full Stack Backend Software Intern.
We are actively seeking a self-motivated Data Engineer with expertise in Azure cloud and Databricks, with a thorough understanding of Delta Lake and Lake-house Architecture. The ideal candidate should excel in developing scalable data solutions, crafting platform tools, and integrating systems, while demonstrating proficiency in cloud-native database solutions and distributed data processing.
Key Responsibilities:
- Contribute to the development and upkeep of a scalable data platform, incorporating tools and frameworks that leverage Azure and Databricks capabilities.
- Exhibit proficiency in various RDBMS databases such as MySQL and SQL Server, emphasizing their integration in applications and pipeline development.
- Design and maintain high-caliber code, including data pipelines and applications, utilizing Python, Scala, and PHP.
- Implement effective data processing solutions via Apache Spark, optimizing Spark applications for large-scale data handling.
- Optimize data storage using formats like Parquet and Delta Lake to ensure efficient data accessibility and reliable performance (see the sketch after this list).
- Demonstrate understanding of Hive Metastore, Unity Catalog Metastore, and the operational dynamics of external tables.
- Collaborate with diverse teams to convert business requirements into precise technical specifications.
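For a concrete, if minimal, picture of the Parquet/Delta Lake item above, here is a hedged sketch that assumes a Spark runtime with Delta Lake available (as on Databricks); the path and schema are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

df = spark.createDataFrame(
    [(1, "widget", 9.99), (2, "gadget", 19.99)],
    ["id", "name", "price"],
)

# Delta stores Parquet files plus a transaction log, adding ACID guarantees
# and time travel on top of plain Parquet.
df.write.format("delta").mode("overwrite").save("/mnt/lake/products")

spark.read.format("delta").load("/mnt/lake/products").show()
```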
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or a related discipline.
- Demonstrated hands-on experience with Azure cloud services and Databricks.
- Proficient programming skills in Python, Scala, and PHP.
- In-depth knowledge of SQL, NoSQL databases, and data warehousing principles.
- Familiarity with distributed data processing and external table management.
- Insight into enterprise data solutions for PIM, CDP, MDM, and ERP applications.
- Exceptional problem-solving acumen and meticulous attention to detail.
Additional Qualifications :
- Acquaintance with data security and privacy standards.
- Experience in CI/CD pipelines and version control systems, notably Git.
- Familiarity with Agile methodologies and DevOps practices.
- Competence in technical writing for comprehensive documentation.
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions.
- Competence in building data warehouses and loading data into them.
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements.
- Implement AWS Lambda for event-based tasks.
- Automate ETL processes using AWS Step Functions (see the sketch after this list).
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
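As a hedged sketch of the event-based automation described above, here is a hypothetical Lambda handler that starts a Step Functions ETL state machine when a file lands in S3; the ARN and payload shape are placeholders:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:etl"  # placeholder

def handler(event, context):
    # S3 put-event records carry the bucket and key of the object that arrived.
    for record in event["Records"]:
        payload = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps(payload),
        )
```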
Responsibilities
· Develop Python-based APIs using FastAPI and Flask frameworks.
· Develop Python-based Automation scripts and Libraries.
· Develop Front End Components using VueJS and ReactJS.
· Writing and modifying Docker files for the Back-End and Front-End Components.
· Integrate CI/CD pipelines for Automation and Code quality checks.
· Writing complex ORM mappings using SQLAlchemy.
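A minimal sketch combining two of the responsibilities above: a FastAPI endpoint backed by a SQLAlchemy ORM mapping (2.0-style API). The model and fields are illustrative:

```python
from fastapi import FastAPI
from sqlalchemy import Integer, String, create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Device(Base):
    __tablename__ = "devices"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    name: Mapped[str] = mapped_column(String(50))

engine = create_engine("sqlite:///demo.db")  # placeholder database
Base.metadata.create_all(engine)

app = FastAPI()

@app.get("/devices")
def list_devices():
    # Query via the ORM and return plain dicts for JSON serialization.
    with Session(engine) as session:
        return [{"id": d.id, "name": d.name} for d in session.scalars(select(Device))]
```

Assuming the file is saved as main.py and uvicorn is installed, it can be served with `uvicorn main:app --reload`.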
Required Skills:
· Strong experience in Python development in a full-stack environment (including NodeJS, VueJS/Vuex, Flask, etc.) is required.
· Experience with SQLAlchemy or similar ORM frameworks.
· Experience working with Geolocation APIs (e.g., Google Maps, Mapbox).
· Experience using Elasticsearch and Airflow is a plus.
· Strong knowledge of SQL, comfortable working with MySQL and/or PostgreSQL databases.
· Understand concepts of Data Modeling.
· Experience with REST.
· Experience with Git, GitFlow, and code review process.
· Good understanding of basic UI and UX principles.
· Excellent problem-solving and communication skills.
Position Overview:
We are searching for an experienced Senior MERN Stack Developer to lead our development efforts. Your expertise will drive the creation of cutting-edge web applications while mentoring junior developers and contributing to technical strategy.
Key Responsibilities:
• Lead and participate in the architecture, design, and development of complex applications.
• Mentor and guide junior developers, fostering skill development and growth.
• Collaborate with cross-functional teams to define technical roadmaps and strategies.
• Conduct code reviews and ensure adherence to coding standards and best practices.
• Stay updated with emerging technologies and advocate for their integration.
• Develop and maintain robust and scalable web applications using the MERN stack.
• Collaborate with front-end and back-end developers to define and implement innovative solutions.
• Design and implement RESTful APIs for seamless integration between front-end and back-end systems.
• Work closely with UI/UX designers to create responsive and visually appealing user interfaces.
• Troubleshoot, debug and optimize code to ensure high performance and reliability.
• Implement security and data protection measures in line with industry best practices.
• Stay updated on emerging trends and technologies in web development.
Qualifications & Skills:
• Bachelor’s or Master’s degree in Computer Science or related field.
• Proven experience as a Senior MERN Stack Developer.
• Strong proficiency in React.js, Node.js, Express.js, and MongoDB.
• Strong proficiency in Typescript, JavaScript, HTML, and CSS.
• Familiarity with front-end frameworks like Bootstrap, Material-UI, etc.
• Experience with version control systems, such as Git.
• Knowledge of database design and management, including both SQL and NoSQL databases.
• Leadership skills and the ability to guide and inspire a team.
• Excellent problem-solving abilities and a strategic mindset.
• Effective communication and collaboration skills.
• Knowledge of AWS and S3 cloud storage.
Location: The position is based in Hyderabad.
Join us in revolutionizing education! If you are a passionate MERN Stack developer with a vision for transforming education in line with NEP2020, we would love to hear from you. Apply now and be part of an innovative team shaping the future of education.
Publicis Sapient Overview:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration and wrangling, computation, and analytics pipelines, and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode (see the sketch after this list)
• Build functionality for data analytics, search and aggregation
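As a hedged sketch of the real-time ingestion work above, here is Spark Structured Streaming reading from Kafka (both appear in the stack below); the broker and topic are placeholders, and the Spark-Kafka connector package must be on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-demo").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
)

# Kafka delivers raw bytes; cast the payload and aggregate per minute.
counts = (
    stream.selectExpr("CAST(value AS STRING) AS value", "timestamp")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```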
Experience Guidelines:
Mandatory Experience and Competencies:
1. Overall 5+ years of IT experience with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in Big Data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed working knowledge of data platform related services on at least one cloud platform, IAM, and data security
Preferred Experience and Knowledge (Good to Have):
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Role: Python-Django Developer
Location: Noida, India
Description:
- Develop web applications using Python and Django (see the sketch after this list).
- Write clean and maintainable code following best practices and coding standards.
- Collaborate with other developers and stakeholders to design and implement new features.
- Participate in code reviews and maintain code quality.
- Troubleshoot and debug issues as they arise.
- Optimize applications for maximum speed and scalability.
- Stay up-to-date with emerging trends and technologies in web development.
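A minimal hypothetical Django sketch of the kind of work listed above (model, view, and route shown together for brevity; project settings and migrations are omitted):

```python
from django.db import models
from django.http import JsonResponse
from django.urls import path

class Article(models.Model):
    title = models.CharField(max_length=200)
    published = models.DateTimeField(auto_now_add=True)

def latest_articles(request):
    # Return the ten most recent articles as JSON.
    items = Article.objects.order_by("-published").values("id", "title")[:10]
    return JsonResponse(list(items), safe=False)

urlpatterns = [
    path("api/articles/latest/", latest_articles),
]
```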
Requirements:
- Bachelor's or Master's degree in Computer Science, Computer Engineering or a related field.
- 4+ years of experience in web development using Python and Django.
- Strong knowledge of object-oriented programming and design patterns.
- Experience with front-end technologies such as HTML, CSS, and JavaScript.
- Understanding of RESTful web services.
- Familiarity with database technologies such as PostgreSQL or MySQL.
- Experience with version control systems such as Git.
- Ability to work in a team environment and communicate effectively with team members.
- Strong problem-solving and analytical skills.
Publicis Sapient Overview:
As a Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration and wrangling, computation, and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.
Job Title: Senior Associate L1 – Data Engineering
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
1. Overall 3.5+ years of IT experience with 1.5+ years in data-related technologies
2. Minimum 1.5 years of experience in Big Data technologies
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Working knowledge of data platform related services on at least one cloud platform, IAM, and data security
7. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Daily and monthly responsibilities
- Review and coordinate with business application teams on data delivery requirements.
- Develop estimation and proposed delivery schedules in coordination with development team.
- Develop sourcing and data delivery designs.
- Review data model, metadata and delivery criteria for solution.
- Review and coordinate with team on test criteria and performance of testing.
- Contribute to the design, development and completion of project deliverables.
- Complete in-depth data analysis and contribute to strategic efforts.
- Develop a complete understanding of how we manage data, with a focus on improving how data is sourced and managed across multiple business areas.
Basic Qualifications
- Bachelor’s degree.
- 5+ years of data analysis working with business data initiatives.
- Knowledge of Structured Query Language (SQL) and use in data access and analysis.
- Proficient in data management including data analytical capability.
- Excellent verbal and written communication skills, along with high attention to detail.
- Experience with Python.
- Presentation skills in demonstrating system design and data analysis solutions.
GradRight is an ed-fin-tech startup focused on global higher education. Using data science, technology and strategic partnerships across the industry, we enable students to find the “Right University” at the “Right Cost”. We are on a mission to aid a million students to find their best-fit universities and financial offerings by 2025.
Our flagship product - FundRight - is the world’s first student loan bidding platform. In a short span of 10 months, we have facilitated more than $50 million in loan disbursements this year, and we are poised to scale up rapidly.
We are launching our second product - SelectRight as an innovative approach to college selection and student recruitment for students and universities, respectively. The product rests on the three pillars of data science, transparency and ethics and hopes to create value for students and universities.
Brief:
We are pursuing a complex set of problems that involve building for an international audience and for an industry that has largely been service-centric. As a Principal Engineer at GradRight, you’ll bring an unmatched customer-centricity to your work, with a focus on building for the long term and large scale.
You’ll drive the creation of frameworks that enable flexible/scalable customer journeys and tie them with institutional knowledge to help us build the best experiences for students and our partners. You’ll also manage a team of high performers to achieve the planned outcomes.
You’ll own the technology strategy of the engineering organization and be a key decision maker when it comes to processes and execution.
Responsibilities:
- Drive design discussions and decisions around building scalable and modular architecture
- Work with product, engineering and business teams to ideate on technology strategy and line up initiatives around the same
- Build clean, modular and scalable backend services
- Build clean, modular and scalable frontends
- Own quality and velocity of releases across the engineering organization
- Manage and mentor a team of engineers
- Participate in sprint ceremonies and actively contribute to scaling the engineering organization from a process perspective
- Stay on top of the software engineering ecosystem, propose and implement new technologies/methodologies as per the business needs
- Contribute to engineering hiring by conducting interviews
- Champion infrastructure-as-code mindset and encourage automation
- Identify problems around engineering processes, propose solutions and drive implementations for the same
Requirements:
- At least 8 years of experience, building large scale applications
- Experience working at startups in growth phase with war stories to share
- Experience with frontend technologies like vue.js or react.js
- Strong experience with at least one backend framework, preferably express.js
- Extensive experience in at least one programming language (preferably Javascript, GoLang) and ability to write maintainable, scalable and unit-testable code
- Experience in CI/CD and cloud infrastructure management
- Strong understanding of software design principles and patterns
- Excellent command over data structures and algorithms
- Passion for solving complex problems
- Good understanding of various database technologies with strong opinions around their use cases
- Experience with performance monitoring and scaling backend services
- Experience with microservices and distributed systems in general
- Experience with team management
- Excellent written and verbal communication skills
Good to have:
- Worked on products that addressed an international audience
- Worked on products that scaled to millions of users
- Exposure to emerging/latest technologies like blockchain, bots, AR/VR
- Exposure to AI/ML
At CAW Studios
Ever dreamed of being part of new product initiatives? Feel the energy and excitement of working on version 1 of a product, bringing an idea on paper to life. Do you crave to work on SaaS products that could become the next Uber, Airbnb, or Flipkart? Here is your chance to be part of a team that will lead the development of a SaaS product.
Our organization relies on its central engineering workforce to develop and maintain a product portfolio of several different startups. Our product portfolio continuously grows as we incubate more startups, which means that different products are very likely to make use of different technologies, architecture & frameworks - a fun place for smart tech lovers! We are looking for a Software Development Engineer in Test to join one of our engineering teams at our office in Hyderabad.
What would you do?
● Improve automation code structure and framework architecture in terms of maintainability, execution speed, and coverage; write, co-write, and review test design/plan documentation.
● Own communication throughout the sprint/release cycle, quality of features, and delivery of the entire feature.
● Lead new language/framework POC within the technical focus area.
● Drive the design/code review process for test automation, seeking and providing constructive criticism.
● Ensure your team has strong sets of documentation and journals of how their test design and architecture/product evolve.
● Lead effort in working with other teams and counterparts to solve problems affecting the team's overall delivery.
● Participate in the prioritization of cross-team automation initiatives & lead those within your team.
● Participate/Support in production/Customer deployment.
Who Should Apply?
● 2-5 years of experience in professional testing.
● 1+ years of experience working with automation testing.
● Familiarity with Microservices architecture.
● Deep understanding of Manual and automation test methodologies and principles.
● Strong problem-solving, interpersonal, organizational, and time management skills.
● Passion for self-improvement and continual learning.
● Great attitude and adaptability to taking on many diverse responsibilities.
● Experience working with Web and API Testing - both manual & automation (see the sketch after this list).
● Experience using tools such as Jira, Git, Cypress, Kubernetes, Selenium, Docker, TestNG, CI/CD pipelines, JavaScript & TypeScript testing tools, and API automation.
● Experience in setting up test infra for functional and non-functional testing.
● Experience working with the Agile process management methodology.
● Experience in performance/security testing and Linux/Unix commands.
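As a small illustration of API test automation of the sort listed above, a hedged sketch using pytest and requests (tools comparable to those named); the base URL and endpoints are placeholders:

```python
import pytest
import requests

BASE_URL = "https://api.example.com"  # placeholder service under test

def test_health_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200

@pytest.mark.parametrize("user_id", [1, 2, 3])
def test_user_lookup_roundtrips_id(user_id):
    resp = requests.get(f"{BASE_URL}/users/{user_id}", timeout=5)
    assert resp.status_code == 200
    assert resp.json()["id"] == user_id
```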
About CAW Studios:
CAW Studios is a Product Engineering Company of 50+ super-geeks based out of Hyderabad. We run complete engineering (Dev + DevOps) for several products like Interakt, CashFlo, KaiPulse, and FastBar. We are also part of the global engineering teams for Haptik, EmailAnalytics, SenorPago, and GrowthZone. We are obsessed with automation, DevOps, OOP, and SOLID. We are not into one tech stack - we are into solving problems.
Know More About CAW Studios:
Find us: https://goo.gl/maps/dvR6L26JUa42
Website: https://www.cawstudios.com/
Know more: https://www.cawstudios.com/handbook
Aprajita Consultancy
Role: Oracle DBA Developer
Location: Hyderabad
Required Experience: 8 + Years
Skills: DBA, Terraform, Ansible, Python, Shell Script, DevOps activities, Oracle DBA, SQL Server, Cassandra, Oracle SQL/PLSQL, MySQL/Oracle/MSSQL/Mongo/Cassandra, security measure configuration
Roles and Responsibilities:
1. 8+ years of hands-on DBA experience in one or many of the following: SQL Server, Oracle, Cassandra
2. DBA experience in a SRE environment will be an advantage.
3. Experience in automation and building databases by providing self-service tools; analyze and implement solutions for database administration (e.g., backups, performance tuning, troubleshooting, capacity planning).
4. Analyze solutions and implement best practices for cloud databases and their components.
5. Build and enhance tooling, automation, and CI/CD workflows (Jenkins, etc.) that provide safe self-service capabilities to the teams.
6. Implement proactive monitoring and alerting to detect issues before they impact users. Use a metrics-driven approach to identify and root-cause performance and scalability bottlenecks in the system.
7. Work on automation of database infrastructure and help engineering succeed by providing self-service tools.
8. Write database documentation, including data standards, procedures, and definitions for the data dictionary (metadata)
9. Monitor database performance, control access permissions and privileges, capacity planning, implement changes and apply new patches and versions when required.
10. Recommend query and schema changes to optimize the performance of database queries.
11. Have experience with cloud-based environments (OCI, AWS, Azure) as well as On-Premises.
12. Have experience with cloud database such as SQL server, Oracle, Cassandra
13. Have experience with infrastructure automation and configuration management (Jira, Confluence, Ansible, Gitlab, Terraform)
14. Have excellent written and verbal English communication skills.
15. Plan, manage, and scale data stores to ensure the business’s complex data requirements are met and its data can be accessed in a fast, reliable, and safe manner.
16. Ensure the quality of orchestration and integration of the tools needed to support daily operations by patching together existing infrastructure with cloud solutions and additional data infrastructure.
17. Protect data through rigorous testing of backup and recovery processes and frequent auditing of well-regulated security procedures.
18. Use software and tooling to automate manual tasks and enable engineers to move fast without the concern of losing data during their experiments.
19. Define service level objectives (SLOs) and perform risk analysis to determine which problems to address and which to automate.
20. Bachelor's Degree in a technical discipline required.
21. DBA certifications required: Oracle, SQL Server, Cassandra (2 or more).
22. Cloud and DevOps certifications will be an advantage.
Must have Skills:
- Oracle DBA with development
- SQL
- DevOps tools
- Cassandra
Fintrac Global services
Required Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent experience.
- 5+ years of experience in a DevOps role, preferably for a SaaS or software company.
- Expertise in cloud computing platforms (e.g., AWS, Azure, GCP).
- Proficiency in scripting languages (e.g., Python, Bash, Ruby).
- Extensive experience with CI/CD tools (e.g., Jenkins, GitLab CI, Travis CI).
- Extensive experience with NGINX and similar web servers.
- Strong knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
- Ability to work on-call as needed and respond to emergencies in a timely manner.
- Experience with high-transaction e-commerce platforms.
Preferred Qualifications:
- Certifications in cloud computing or DevOps are a plus (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert).
- Experience in a high-availability, 24x7x365 environment.
- Strong collaboration, communication, and interpersonal skills.
- Ability to work independently and as part of a team.
Job Description:
Design and develop IoT/Cloud-based TypeScript/JavaScript/Node.JS applications using Amazon Cloud Computing Services.
Work closely with onsite, offshore, and cross-functional teams (Product Management, UI/UX developers, Web and Mobile developers, and SQA teams) to effectively use technologies to build and deliver high-quality IoT applications on time.
Resolve bugs and issues.
Proactively identify risks and failure modes early in the development lifecycle and develop POCs to mitigate the risks early in the program.
Assertive communication and team skills.
Primary Skills:
Hands-on experience (3+ years) in an AWS cloud-native environment, with work experience in AWS Lambda, Kinesis, and DynamoDB (a minimal Lambda sketch follows this list)
3+ years of experience working with Node.js, Python, unit testing, and Git
3+ years of work experience with document, relational, or time-series databases
2+ years of work experience with TypeScript
1+ years with an IaC (infrastructure-as-code) framework such as Serverless or CDK, with CloudFormation knowledge
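For illustration only, a minimal AWS Lambda handler in Python that persists an IoT reading to DynamoDB via boto3 (the table name, key schema, and event shape are assumptions, not part of the posting):

    import json
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("DeviceReadings")  # hypothetical table keyed by device_id and ts

    def handler(event, context):
        # Assumed event shape: {"device_id": "t-001", "ts": 1700000000, "temperature": 21.5}
        reading = json.loads(event["body"]) if "body" in event else event
        table.put_item(Item={
            "device_id": reading["device_id"],
            "ts": reading["ts"],
            "temperature": str(reading["temperature"]),  # stored as string; DynamoDB rejects Python floats
        })
        return {"statusCode": 200, "body": json.dumps({"ok": True})}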
Job Description:
Responsibilities:
* Work on real-world computer vision problems
* Write robust industry-grade algorithms
* Leverage OpenCV, Python, and deep learning frameworks to train models.
* Use deep learning technologies such as Keras, TensorFlow, PyTorch, etc.
* Develop integrations with various in-house or external microservices.
* Must have experience in deployment practices (Kubernetes, Docker, containerization, etc.) and model compression practices
* Research latest technologies and develop proof of concepts (POCs).
* Build and train state-of-the-art deep learning models to solve Computer Vision problems (a minimal detection sketch follows this list), including, but not limited to:
* Segmentation
* Object Detection
* Classification
* Object Tracking
* Visual Style Transfer
* Generative Adversarial Networks
* Work alongside other researchers and engineers to develop and deploy solutions for challenging real-world problems in the area of Computer Vision
* Develop and plan Computer Vision research projects, defining the scope of work, including formal research objectives and outcomes
* Provide specialized technical / scientific research to support the organization on different projects for existing and new technologies
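As a purely illustrative sketch of the detection work above, loading a pretrained detector with torchvision and running inference on a dummy frame (the model choice and input size are assumptions):

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # weights pretrained on COCO
    model.eval()

    image = torch.rand(3, 480, 640)  # stand-in for a real preprocessed video frame
    with torch.no_grad():
        prediction = model([image])[0]  # dict with "boxes", "labels", "scores"
    print(prediction["boxes"].shape, prediction["scores"][:5])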
Skills:
* Object Detection
* Computer Science
* Image Processing
* Computer Vision
* Deep Learning
* Artificial Intelligence (AI)
* Pattern Recognition
* Machine Learning
* Data Science
* Generative Adversarial Networks (GANs)
* Flask
* SQL
AWS Glue Developer
Work Experience: 6 to 8 Years
Work Location: Noida, Bangalore, Chennai & Hyderabad
Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, Data Integration and DataOps
Job Reference ID:BT/F21/IND
Job Description:
Design, build and configure applications to meet business process and application requirements.
Responsibilities:
7 years of work experience with ETL, data modelling, and data architecture. Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark. Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions, orchestrated using Airflow.
Technical Experience:
Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines. Experience building data pipelines and applications to stream and process large datasets at low latency.
➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.
➢ Create data pipeline architecture by designing and implementing data ingestion solutions.
➢ Integrate data sets using AWS services such as Glue, Lambda functions/ Airflow.
➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.
➢ Author ETL processes using Python and PySpark (a minimal sketch follows this list).
➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.
➢ ETL process monitoring using CloudWatch events.
➢ You will be working in collaboration with other teams; good communication is a must.
➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
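A minimal, illustrative PySpark ETL sketch of the kind described above (bucket names, paths, and columns are hypothetical):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: raw JSON landed in S3 (assumed path and schema)
    orders = spark.read.json("s3://example-bucket/raw/orders/")

    # Transform: aggregate order amounts by day
    daily = (orders
             .withColumn("order_date", F.to_date("created_at"))
             .groupBy("order_date")
             .agg(F.sum("amount").alias("total_amount"),
                  F.count("*").alias("order_count")))

    # Load: write partitioned Parquet for Athena/Redshift Spectrum to query
    daily.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/daily_orders/")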
Professional Attributes:
➢ Experience operating very large data warehouses or data lakes. Expert-level skills in writing and optimizing SQL. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.
➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.
➢ Expertise in S3, RDS, Redshift, Kinesis, and EC2 clusters is highly desired.
Qualification:
➢ Degree in Computer Science, Computer Engineering or equivalent.
Salary: Commensurate with experience and demonstrated competence
- 2 to 5 years of experience (or equivalent understanding of software engineering)
- Familiar with one backend language (Node, Go, Java, Python)
- Familiar with Javascript/Typescript and a UI framework
- Willingness and interest in learning new tech/processes (Airflow, AWS, IaaS, etc.)
- 0 to 3 years of experience (or equivalent understanding of software engineering)
- Familiar with Javascript/Typescript and React.js
- Familiar with one backend language (Node, Go, Java, Python) and one relational database (any SQL)
- Willingness and interest in learning new tech/processes (Next.js, Angular, IaaS, etc.)
Position: Technical Architect
Location: Hyderabad
Experience: 6+ years
Job Summary:
We are looking for an experienced Technical Architect with a strong background in Python, Node.js, and React to lead the design and development of complex, scalable software solutions. The ideal candidate will possess exceptional technical skills, a deep understanding of software architecture principles, and a proven track record of successfully delivering high-quality projects. You should be capable of leading a cross-functional team that is responsible for the full software development life cycle, from conception to deployment, using Agile methodologies.
Responsibilities:
● Lead the design, development, and deployment of software solutions, ensuring architectural integrity and high performance.
● Collaborate with cross-functional teams, including developers, designers, and product managers, to define technical requirements and create effective solutions.
● Provide technical guidance and mentorship to development teams, ensuring best practices and coding standards are followed.
● Evaluate and recommend appropriate technologies, frameworks, and tools to achieve project goals.
● Drive continuous improvement by staying updated with industry trends, emerging technologies, and best practices.
● Conduct code reviews, identify areas of improvement, and promote a culture of excellence in software development.
● Participate in architectural discussions, making strategic decisions and aligning technical solutions with business objectives.
● Troubleshoot and resolve complex technical issues, ensuring optimal performance and reliability of software applications.
● Collaborate with stakeholders to gather and analyze requirements, translating them into technical specifications.
● Define and enforce architectural patterns, ensuring scalability, security, and maintainability of systems.
● Lead efforts to refactor and optimize existing codebase, enhancing performance and maintainability.
Qualifications:
● Bachelor's degree in Computer Science, Software Engineering, or a related field. Master's degree is a plus.
● Minimum of 8 years of experience in software development with a focus on Python, Node.js, and React.
● Proven experience as a Technical Architect, leading the design and development of complex software systems.
● Strong expertise in software architecture principles, design patterns, and best practices.
● Extensive hands-on experience with Python, Node.js, and React, including designing and implementing scalable applications.
● Solid understanding of microservices architecture, RESTful APIs (a minimal sketch follows this list), and cloud technologies (AWS, GCP, or Azure).
● Extensive knowledge of JavaScript, web stacks, libraries, and frameworks.
● Ability to create automation test cases and unit test cases (optional).
● Proficiency in database design, optimization, and data modeling.
● Experience with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes).
● Excellent problem-solving skills and the ability to troubleshoot complex technical issues.
● Strong communication skills, both written and verbal, with the ability to effectively interact with cross-functional teams.
● Prior experience in mentoring and coaching development teams.
● Strong leadership qualities with a passion for technology innovation.
● Experience using Linux-based development environments with GitHub and CI/CD.
● Experience with the Atlassian stack (JIRA/Confluence).
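As a small, illustrative example of a RESTful API on the Python side of this stack, a Flask sketch (the endpoints and data model are hypothetical):

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    ITEMS = {}  # in-memory store, illustration only

    @app.post("/items")
    def create_item():
        item = request.get_json()
        ITEMS[str(item["id"])] = item
        return jsonify(item), 201

    @app.get("/items/<item_id>")
    def get_item(item_id):
        item = ITEMS.get(item_id)
        if item is None:
            return jsonify({"error": "not found"}), 404
        return jsonify(item)

    if __name__ == "__main__":
        app.run(port=8080)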
Company Profile :
Merilytics, an Accordion company, is a fast-growing analytics firm offering advanced and intelligent analytical solutions to clients globally. We combine domain expertise, advanced analytics, and technology to provide robust solutions for clients' business problems. You can find further details about the company at https://merilytics.com.
We partner with our clients in the Private Equity, CPG, Retail, Healthcare, Media & Entertainment, Technology, Logistics industries, etc. by providing analytical solutions to generate superior returns. We solve clients' business problems by analyzing large amounts of data to help guide their Operations, Marketing, Pricing, Customer Strategies, and much more.
Position :
- The Business Associate at Merilytics will work on complex analytical projects and is the primary owner of the work streams involved.
- Business Associates are expected to lead a team of Business Analysts to deliver robust analytical solutions consistently and to mentor the Analysts for professional development.
Location : Hyderabad
Roles and Responsibilities :
The roles and responsibilities of a Business Associate will include the below:
- Proactively provide thought leadership to the team and have complete control on the delivery process of the project.
- Understand the client's point of view and translate it into sound judgment calls in ambiguous analytical situations.
- Highlight potential analytical issues upfront and resolve them independently.
- Synthesize the analysis and derive insights independently.
- Identify the crux of the client problem and leverage it to draw relevant actionable insights from the analysis/work.
- Ability to manage multiple Analysts and provide customized guidance for individual development.
- Resonate with our five core values - Client First, Excellence, Integrity, Respect and Teamwork.
Pre-requisites and skillsets required to apply for this role :
- An undergraduate degree (B.E/B.Tech.) from tier-1/tier-2 colleges is preferred.
- Should have 2-4 years of experience.
- Strong leadership & proactive communication to coordinate with the project team and other internal stakeholders.
- Ability to use business judgement and a structured approach towards solving complex problems.
- Experience in client-facing/professional services environment is a plus.
- Strong hard skills on analytics tools such as R, Python, SQL, and Excel is a plus.
Why Explore a Career at Merilytics :
- High growth environment: Semi-annual performance management and promotion cycles coupled with a strong meritocratic culture, enables fast track to leadership responsibility.
- Cross Domain Exposure: Interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
- Entrepreneurial Environment: Intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
- Fun culture and peer group: Non-bureaucratic and fun working environment; Strong peer environment that will challenge you and accelerate your learning curve.
Other benefits for full time employees:
(i) Health and wellness programs that include employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision, dental) for employee and family members, free doctor's consultations, counselors, etc.
(ii) Corporate Meal card options for ease of use and tax benefits.
(iii) Work dinners, team lunches, company sponsored team outings and celebrations.
(iv) Reimbursement support for travel to the office, as and when promulgated by the Company.
(v) Cab reimbursement for women employees beyond a certain time of the day.
(vi) Robust leave policy to support work-life balance. Specially designed leave structure to support women employees for maternity and related requests.
(vii) Reward and recognition platform to celebrate professional and personal milestones.
(viii) A positive & transparent work environment including various employee engagement and employee benefit initiatives to support personal and professional learning and development.
5+ years of experience designing, developing, validating, and automating ETL processes.
3+ years of experience with traditional ETL tools such as Visual Studio, SQL Server Management Studio, SSIS, SSAS, and SSRS.
2+ years of experience with cloud technologies and platforms, such as Kubernetes, Spark, Kafka, Azure Data Factory, Snowflake, MLflow, Databricks, Airflow, or similar.
Must have experience designing and implementing data access layers.
Must be an expert with SQL/T-SQL and Python.
Must have experience with Kafka.
Define and implement data models with various database technologies like MongoDB, CosmosDB, Neo4j, MariaDB, and SQL Server.
Ingest and publish data from sources and to destinations via an API.
Exposure to ETL/ELT using Kafka or Azure Event Hubs with Spark or Databricks is a plus.
Exposure to healthcare technologies and integrations for FHIR API, HL7, or other HIE protocols is a plus.
Skills Required :
Designing, Developing, ETL, Visual Studio, Python, Spark, Kubernetes, Kafka, Azure Data Factory, SQL Server, Airflow, Databricks, T-SQL, MongoDB, CosmosDB, Snowflake, SSIS, SSAS, SSRS, FHIR API, HL7, HIE Protocols
Job Title: System Engineer
Responsibilities include, but are not limited to:
· Understand mobile SoC architecture across different multimedia and connectivity applications
· Evaluate memory/storage architecture on mobile platforms and develop architectures to improve it
· Innovate new solutions to complex multi-disciplinary problems by collaborating with other team members
· Identify new mobile workloads that will define memory/storage usage in future products
· Evaluate and present architecture trade-offs impacting memory/storage subsystem performance and power
Successful candidates for this position will have the following:
· Bachelors/Master’s degree in Electronics and/or Computer Engineering or Computer Science or related field
· 4-8 years of work experience in the Mobile ecosystem
· Hands-on experience in power and performance analysis
· Good understanding of mobile SoC and Android systems
· Good understanding of, and experience with, system benchmarking and multimedia applications
· Good understanding of the storage subsystem, with hands-on experience on host-side drivers
Preferred Skills:
· Strong Mobile Platform SW and HW knowledge
· Strong working knowledge of DRAM, NAND, and other memory/storage devices
· Strong understanding of architecture in multimedia and connectivity use cases
· Experience in C++, Python, or another systems programming language
We are looking for an experienced Sr. DevOps Consultant Engineer to join our team. The ideal candidate should have at least 5 years of experience.
We are retained by a promising startup located in Silicon Valley, backed by a Fortune 50 firm, with veterans from firms such as Zscaler, Salesforce, and Oracle. The founding team has been part of three unicorns and two successful IPOs and is well funded by Dell Technologies and Westwave Capital. The company is widely recognized as an industry innovator in the data privacy and security space and is being built by proven cybersecurity executives who have successfully built and scaled high-growth security companies and built privacy programs as executives.
Responsibilities:
- Develop and maintain infrastructure as code using tools like Terraform, CloudFormation, and Ansible
- Manage and maintain Kubernetes clusters on EKS and EC2 instances
- Implement and maintain automated CI/CD pipelines for microservices
- Optimize AWS costs by identifying cost-saving opportunities and implementing cost-effective solutions
- Implement best security practices for microservices, including vulnerability assessments, SOC2 compliance, and network security
- Monitor the performance and availability of our cloud infrastructure using observability tools such as Prometheus, Grafana, and Elasticsearch (a minimal instrumentation sketch follows this list)
- Implement backup and disaster recovery solutions for our microservices and databases
- Stay up to date with the latest AWS services and technologies and provide recommendations for improving our cloud infrastructure
- Collaborate with cross-functional teams, including developers and product managers, to ensure the smooth operation of our cloud infrastructure
- Experience with large scale system design and scaling services is highly desirable
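For illustration, a minimal Prometheus instrumentation sketch in Python using the prometheus_client library (the metric name and simulated workload are hypothetical):

    import random
    import time

    from prometheus_client import Histogram, start_http_server

    REQUEST_LATENCY = Histogram("request_latency_seconds", "Latency of handled requests")

    @REQUEST_LATENCY.time()  # records each call's duration into the histogram
    def handle_request():
        time.sleep(random.uniform(0.01, 0.2))  # stand-in for real work

    if __name__ == "__main__":
        start_http_server(8000)  # exposes /metrics on :8000 for Prometheus to scrape
        while True:
            handle_request()

Grafana can then chart the scraped histogram (for example, p95 latency), and alerting rules can fire on it.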
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- At least 5 years of experience in AWS DevOps and infrastructure engineering
- Expertise in Kubernetes management, Docker, EKS, EC2, Queues, Python Threads, Celery Optimization, Load balancers, AWS cost optimizations, Elasticsearch, Container management, and observability best practices
- Experience with SOC2 compliance and vulnerability assessment best practices for microservices
- Familiarity with AWS services such as S3, RDS, Lambda, and CloudFront
- Strong scripting skills in languages like Python, Bash, and Go
- Excellent communication skills and the ability to work in a collaborative team environment
- Experience with agile development methodologies and DevOps practices
- AWS certification (e.g. AWS Certified DevOps Engineer, AWS Certified Solutions Architect) is a plus.
Notice period : Can join within a month
About Quadratyx:
We are a global, product-centric insight & automation services company. We help the world's organizations make better and faster decisions using the power of insight and intelligent automation. We build and operationalize their next-gen strategy through Big Data, Artificial Intelligence, Machine Learning, Unstructured Data Processing, and Advanced Analytics. Quadratyx can boast more extensive experience in data sciences & analytics than most other companies in India.
We firmly believe in Excellence Everywhere.
Job Description
Purpose of the Job/ Role:
• As a Technical Lead, your work is a combination of hands-on contribution, customer engagement and technical team management. Overall, you’ll design, architect, deploy and maintain big data solutions.
Key Requisites:
• Expertise in Data structures and algorithms.
• Technical management across the full life cycle of big data (Hadoop) projects from requirement gathering and analysis to platform selection, design of the architecture and deployment.
• Scaling of cloud-based infrastructure.
• Collaborating with business consultants, data scientists, engineers and developers to develop data solutions.
• Experience leading and mentoring a team of data engineers.
• Hands-on experience in test-driven development (TDD).
• Expertise in NoSQL databases like MongoDB and Cassandra (MongoDB preferred), and strong knowledge of relational databases.
• Good knowledge of Kafka and Spark Streaming internal architecture (a minimal streaming sketch follows this list).
• Good knowledge of any application server.
• Extensive knowledge of big data platforms like Hadoop, Hortonworks, etc.
• Knowledge of data ingestion and integration on cloud services such as AWS, Google Cloud, Azure, etc.
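For illustration, a minimal Spark Structured Streaming sketch that consumes a Kafka topic (the broker address and topic are hypothetical; running it requires the spark-sql-kafka package on the classpath):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka_stream").getOrCreate()

    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")
              .option("subscribe", "events")
              .load())

    # Count occurrences of each message value, updating as new records arrive
    counts = (events.selectExpr("CAST(value AS STRING) AS v")
              .groupBy("v").count())

    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()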
Skills/ Competencies Required
Technical Skills
• Strong expertise (9 or more out of 10) in at least one modern programming language, like Python or Java.
• Clear end-to-end experience in designing, programming, and implementing large software systems.
• Passion and analytical abilities to solve complex problems.
Soft Skills
• Always speaking your mind freely.
• Communicating ideas clearly in speech and writing; integrity to never copy or plagiarize others' intellectual property.
• Exercising discretion and independent judgment where needed in performing duties; not needing micro-management, maintaining high professional standards.
Academic Qualifications & Experience Required
Required Educational Qualification & Relevant Experience
• Bachelor’s or Master’s in Computer Science, Computer Engineering, or related discipline from a well-known institute.
• Minimum 7 - 10 years of work experience as a developer in an IT organization (preferably with an Analytics / Big Data / Data Science / AI background).
We are #hiring for AWS Data Engineer expert to join our team
Job Title: AWS Data Engineer
Experience: 5 Yrs to 10 Yrs
Location: Remote
Notice: Immediate or Max 20 Days
Role: Permanent Role
Skillset: AWS, ETL, SQL, Python, PySpark, Postgres DB, Dremio.
Job Description:
Able to develop ETL jobs.
Able to help with data curation/cleanup, data transformation, and building ETL pipelines.
Strong Postgres DB experience; knowledge of Dremio as a data visualization/semantic layer between the DB and the application is a plus.
SQL, Python, and PySpark are a must.
Communication skills should be good.
- Creating and managing ETL/ELT pipelines based on requirements
- Build PowerBI dashboards and manage datasets needed.
- Work with stakeholders to identify the data structures needed in the future and perform any transformations, including aggregations (a minimal pandas sketch follows this list).
- Build data cubes for real-time visualisation needs and CXO dashboards.
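Purely as an illustration of the aggregation step, a small pandas sketch (the columns and figures are made up):

    import pandas as pd

    # Hypothetical raw sales extract
    sales = pd.DataFrame({
        "region": ["N", "N", "S", "S"],
        "month": ["2024-01", "2024-02", "2024-01", "2024-02"],
        "revenue": [120.0, 135.5, 98.0, 110.25],
    })

    # Aggregate to the shape a dashboard dataset or cube typically needs
    cube = (sales.groupby(["region", "month"], as_index=False)
                 .agg(total_revenue=("revenue", "sum"),
                      orders=("revenue", "count")))
    print(cube)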
Required Tech Skills
- Microsoft PowerBI & DAX
- Python, Pandas, PyArrow, Jupyter Notebooks, Apache Spark
- Azure Synapse, Azure DataBricks, Azure HDInsight, Azure Data Factory
Key Responsibilities:
- Work with the development team to plan, execute and monitor deployments
- Capacity planning for product deployments
- Adopt best practices for deployment and monitoring systems
- Ensure the SLAs for performance and uptime are met
- Constantly monitor systems, suggest changes to improve performance and decrease costs.
- Ensure the highest standards of security
Key Competencies (Functional):
- Proficiency in coding in at least one scripting language (bash, Python, etc.)
- Has personally managed a fleet of servers (> 15)
- Understand different environments: production, deployment, and staging
- Worked in micro service / Service oriented architecture systems
- Has worked with automated deployment systems – Ansible / Chef / Puppet.
- Can write MySQL queries
Designation: Graphics and Simulation Engineer
Experience: 3-15 Yrs
Position Type: Full Time
Position Location: Hyderabad
Description:
We are looking for engineers to work on applied research problems related to computer graphics in autonomous driving of electric tractors. The team works towards creating a universe of farm environments in which tractors can drive around, for the purposes of simulation, synthetic data generation for deep learning training, simulation of edge cases, and modelling physics.
Technical Skills:
● Background in OpenGL, OpenCL, graphics algorithms and optimization is necessary.
● Solid theoretical background in computational geometry and computer graphics is desired. Deep learning background is optional.
● Experience in two-view and multi-view geometry.
● Necessary Skills: Python, C++, Boost, OpenGL, OpenCL, Unity3D/Unreal, WebGL, CUDA.
● Academic experience for freshers in graphics is also preferred.
● Experienced candidates in Computer Graphics with no prior Deep Learning experience willing to apply their knowledge to vision problems are also encouraged to apply.
● Software development experience on low-power embedded platforms is a plus.
Responsibilities:
● Understanding of engineering principles and a clear understanding of data structures and algorithms.
● Ability to understand, optimize and debug imaging algorithms.
● Ability to drive a project from conception to completion, from research papers to code, with a disciplined approach to software development on the Linux platform
● Demonstrate outstanding ability to perform innovative and significant research in the form of technical papers, thesis, or patents.
● Optimize runtime performance of designed models.
● Deploy models to production and monitor performance and debug inaccuracies and exceptions.
● Communicate and collaborate with team members in India and abroad for the fulfillment of your duties and organizational objectives.
● Thrive in a fast-paced environment and have the ability to own the project end to end with minimum hand holding
● Learn & adapt new technologies & skillsets
● Work on projects independently with timely delivery & defect free approach.
● Thesis focusing on the above skill set may be given more preference.
Job Title: Data Engineer
Job Summary: As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure and tools necessary for data collection, storage, processing, and analysis. You will work closely with data scientists and analysts to ensure that data is available, accessible, and in a format that can be easily consumed for business insights.
Responsibilities:
- Design, build, and maintain data pipelines to collect, store, and process data from various sources.
- Create and manage data warehousing and data lake solutions.
- Develop and maintain data processing and data integration tools.
- Collaborate with data scientists and analysts to design and implement data models and algorithms for data analysis.
- Optimize and scale existing data infrastructure to ensure it meets the needs of the business.
- Ensure data quality and integrity across all data sources (a minimal validation sketch follows this list).
- Develop and implement best practices for data governance, security, and privacy.
- Monitor data pipeline performance and errors, and troubleshoot issues as needed.
- Stay up-to-date with emerging data technologies and best practices.
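As a minimal, illustrative sketch of a data-quality gate (the column names and checks are hypothetical):

    import pandas as pd

    # Hypothetical batch loaded by a pipeline
    df = pd.DataFrame({
        "order_id": [1, 2, 2, 4],
        "amount": [10.0, None, 15.0, -3.0],
    })

    # Simple integrity checks a pipeline might run before loading downstream
    issues = {
        "duplicate_keys": int(df["order_id"].duplicated().sum()),
        "null_amounts": int(df["amount"].isna().sum()),
        "negative_amounts": int((df["amount"] < 0).sum()),
    }
    print(issues)  # a real pipeline would fail or quarantine the batch if any count is non-zero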
Requirements:
Bachelor's degree in Computer Science, Information Systems, or a related field.
Experience with ETL tools like Matillion, SSIS, Informatica
Experience with SQL and relational databases such as SQL Server, MySQL, PostgreSQL, or Oracle.
Experience in writing complex SQL queries
Strong programming skills in languages such as Python, Java, or Scala.
Experience with data modeling, data warehousing, and data integration.
Strong problem-solving skills and ability to work independently.
Excellent communication and collaboration skills.
Familiarity with big data technologies such as Hadoop, Spark, or Kafka.
Familiarity with data warehouse/Data lake technologies like Snowflake or Databricks
Familiarity with cloud computing platforms such as AWS, Azure, or GCP.
Familiarity with Reporting tools
Teamwork / growth contribution:
- Helping the team take interviews and identify the right candidates
- Adhering to timelines
- On-time status communication and upfront communication of any risks
- Teach, train, and share knowledge with peers.
- Good Communication skills
- Proven abilities to take initiative and be innovative
- Analytical mind with a problem-solving aptitude
Good to have :
Master's degree in Computer Science, Information Systems, or a related field.
Experience with NoSQL databases such as MongoDB or Cassandra.
Familiarity with data visualization and business intelligence tools such as Tableau or Power BI.
Knowledge of machine learning and statistical modeling techniques.
If you are passionate about data and want to work with a dynamic team of data scientists and analysts, we encourage you to apply for this position.
Position: ETL Developer
Location: Mumbai
Exp.Level: 4+ Yrs
Required Skills:
* Strong scripting knowledge, such as Python and Shell
* Strong relational database skills especially with DB2/Sybase
* Create high quality and optimized stored procedures and queries
* Strong with scripting languages such as Python and Unix / K-Shell
* Strong knowledge base of relational database performance and tuning such as: proper use of indices, database statistics/reorgs, de-normalization concepts.
* Familiarity with the lifecycle of a trade and flows of data in an investment banking operation is a plus.
* Experienced in Agile development process
* Java Knowledge is a big plus but not essential
* Experience in delivery of metrics / reporting in an enterprise environment (e.g. demonstrated experience in BI tools such as Business Objects, Tableau, report design & delivery) is a plus
* Experience on ETL processes and tools such as Informatica is a plus. Real time message processing experience is a big plus.
* Good team player; Integrity & ownership
From building entire infrastructures or platforms to solving complex IT challenges, Cambridge Technology helps businesses accelerate their digital transformation and become AI-first businesses. With over 20 years of expertise as a technology services company, we enable our customers to stay ahead of the curve by helping them figure out the perfect approach, solutions, and ecosystem for their business. Our experts help customers leverage the right AI, big data, cloud solutions, and intelligent platforms that will help them become and stay relevant in a rapidly changing world.
No Of Positions: 1
Skills required:
- The ideal candidate will have a bachelor’s degree in data science, statistics, or a related discipline with 4-6 years of experience, or a master’s degree with 4-6 years of experience. A strong candidate will also possess many of the following characteristics:
- Strong problem-solving skills with an emphasis on achieving proof-of-concept
- Knowledge of statistical techniques and concepts (regression, statistical tests, etc.)
- Knowledge of machine learning and deep learning fundamentals
- Experience with Python implementations to build ML and deep learning algorithms (e.g., pandas, NumPy, scikit-learn, statsmodels, Keras, PyTorch, etc.); a minimal scikit-learn sketch follows this list
- Experience writing and debugging code in an IDE
- Experience using managed web services (e.g., AWS, GCP, etc.)
- Strong analytical and communication skills
- Curiosity, flexibility, creativity, and a strong tolerance for ambiguity
- Ability to learn new tools from documentation and internet resources.
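As a minimal, illustrative scikit-learn sketch of the kind of modeling above (the dataset and model choice are arbitrary):

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))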
Roles and responsibilities :
- You will work on a small, core team alongside other engineers and business leaders throughout Cambridge with the following responsibilities:
- Collaborate with client-facing teams to design and build operational AI solutions for client engagements.
- Identify relevant data sources for data wrangling and EDA
- Identify model architectures to use for client business needs.
- Build full-stack data science solutions up to MVP that can be deployed into existing client business processes or scaled up based on clear documentation.
- Present findings to teammates and key stakeholders in a clear and repeatable manner.
Experience :
2 - 14 Yrs
Designation: Perception Engineer (3D)
Experience: 0 years to 8 years
Position Type: Full Time
Position Location: Hyderabad
Compensation: As Per Industry standards
About Monarch:
At Monarch, we’re leading the digital transformation of farming. Monarch Tractor augments both muscle and mind with fully loaded hardware, software, and service machinery that will spur future generations of farming technologies.
With our farmer-first mentality, we are building a smart tractor that will enhance (not replace) the existing farm ecosystem, alleviate labor availability and cost issues, and provide an avenue for competitive organic and beyond farming by providing mechanical solutions to replace harmful chemical solutions. Despite all the cutting-edge technology we will incorporate, our tractor will still plow, till, and haul better than any other tractor in its class. We have all the necessary ingredients to develop, build, and scale the Monarch Tractor and digitally transform farming around the world.
Description:
We are looking for engineers to work on applied research problems related to perception in autonomous driving of electric tractors. The team works on classical and deep learning-based techniques for computer vision. Several problems, like SfM, SLAM, 3D image processing, and multiple view geometry, are being solved for deployment on resource-constrained hardware.
Technical Skills:
- Background in Linear Algebra, Probability and Statistics, graphical algorithms and optimization problems is necessary.
- Solid theoretical background in 3D computer vision, computational geometry, SLAM and robot perception is desired. Deep learning background is optional.
- Knowledge of some numerical algorithms or libraries among: Bayesian filters, SLAM, Eigen, Boost, g2o, PCL, Open3D, ICP.
- Experience in two-view and multi-view geometry (a minimal two-view sketch follows this list).
- Necessary Skills: Python, C++, Boost, Computer Vision, Robotics, OpenCV.
- Academic experience for freshers in Vision for Robotics is preferred.
- Experienced candidates in Robotics with no prior Deep Learning experience willing to apply their knowledge to vision problems are also encouraged to apply.
- Software development experience on low-power embedded platforms is a plus.
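For illustration, a minimal two-view geometry sketch with OpenCV: recover the relative camera pose from synthetic correspondences (the intrinsics, poses, and points are all made up):

    import cv2
    import numpy as np

    rng = np.random.default_rng(0)
    pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], (50, 3))            # random 3D points
    K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])     # assumed intrinsics
    R_true, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))            # small rotation about y
    t_true = np.array([[0.5], [0.0], [0.0]])                        # baseline along x

    def project(pts, R, t):
        cam = (R @ pts.T + t).T
        uv = (K @ cam.T).T
        return (uv[:, :2] / uv[:, 2:]).astype(np.float32)

    pts1 = project(pts3d, np.eye(3), np.zeros((3, 1)))  # view 1: identity pose
    pts2 = project(pts3d, R_true, t_true)               # view 2: displaced camera

    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    print("recovered rotation:\n", R.round(3))  # should approximate R_true

In real use the correspondences would come from feature matching rather than a simulator.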
Responsibilities:
- Understanding engineering principles and a clear understanding of data structures and algorithms.
- Ability to understand, optimize and debug imaging algorithms.
- Ability to drive a project from conception to completion, from research papers to code, with a disciplined approach to software development on the Linux platform.
- Demonstrate outstanding ability to perform innovative and significant research in the form of technical papers, thesis, or patents.
- Optimize runtime performance of designed models.
- Deploy models to production and monitor performance and debug inaccuracies and exceptions.
- Communicate and collaborate with team members in India and abroad for the fulfillment of your duties and organizational objectives.
- Thrive in a fast-paced environment and can own the project end to end with minimum hand holding.
- Learn & adapt to new technologies & skillsets.
- Work on projects independently with timely delivery & defect free approach.
- Thesis focusing on the above skill set may be given more preference.
What you will get:
At Monarch Tractor, you’ll play a key role on a capable, dedicated, high-performing team of rock stars. Our compensation package includes a competitive salary, excellent health benefits commensurate with the role you’ll play in our success.
We are looking for a Technical Program Manager with at least 12 years of experience managing the planning, execution, and delivery of complex technical projects or programs. You will ensure that technical projects are completed within agreed-upon timelines, budgets, and quality standards, and shall play a critical role in ensuring the successful delivery of complex technical projects and programs.
No of Positions 1
Skills Required
- 12+ years of experience with a development background and excellent communication skills
- Should have strong project management and people skills along with technology skills (primarily web based application development, but can be any)
- Program Planning: Define the scope and objectives of the program and develop a detailed project plan, including schedules, budgets, and resource requirements.
- Risk Management: Manage program risks, develop mitigation plans, and communicate risk status to stakeholders.
- Stakeholder Management: Work closely with stakeholders to ensure that program objectives are aligned with business goals and objectives.
- Communication: Communicate program status, risks, and issues to stakeholders and senior management.
- Project Management: Manage the day-to-day activities of the project team, ensuring that tasks are completed on time and within budget.
- Quality Assurance: Ensure that the program meets the required quality standards.
- Resource Management: Manage program resources, including staffing, budget, and equipment.
- Change Management: Manage changes to the program scope, schedule, and budget, and ensure that stakeholders are informed and aligned with the changes.
- Technical Expertise: You will have strong technical expertise to understand the technical aspects of the program and provide guidance to the project team.
- Continuous Improvement: Continuously monitor and evaluate program performance and identify opportunities for improvement.
- Collaboration: Collaborate effectively with cross-functional teams, including software engineers, product managers, and other stakeholders. You must be able to build and maintain strong relationships with team members and stakeholders to ensure the project's success.
Roles & Responsibilities:
- Supervise, plan, and manage the stages of product development, approving the product at each stage.
- For maximum efficiency, create detailed project plans at each stage of product development.
- Maintain a steady flow of ideas and solutions during product development initiatives in order to introduce innovation and improve operational efficiency.
- Keep track of key metrics and create reports to communicate development progress to senior management, product managers, and cross-functional stakeholders.
- Work with a variety of teams, such as software architects, software engineers, system engineers, developers, and product teams.
- Determine cross-team dependencies and include them in the program planning process.
- Determine how to solve technical problems by diagnosing them and suggesting potential solutions.
- Ensure that product development and delivery can be completed within the product budget and timeframe.
- Oversee the product deployment process and, if necessary, assist with the integration process.
- Make necessary changes to product development processes based on performance metrics.
- Stay current on the latest developments in our product category and industry.
- Ensure complete compliance with industry standards by documenting all processes and adjusting them as needed.
- Manage project escalations and assist in the formation of project teams as needed.
- Maintain and implement project plans and the technical programs necessary to assist product management teams within the organization.
Experience: 12+Years
Location: Hyderabad
Job Responsibilities
✓ Perform the role of Technical Lead on assigned projects to drive solution design (especially backend) and API services development.
✓ Be the thought leader and champion for the above-mentioned technologies.
✓ Drive technical analysis for new projects including planning and driving proof-of-concepts, if needed.
✓ Drive tasks related to backend development by providing architectural and technical leadership to mid-tier and database developers.
✓ Conduct peer reviews, as the lead, of code merged into Git to confirm that developed code meets acceptable standards and guidelines.
✓ Work closely with the rest of the leads, mid-tier development, front-end developers, database developers, etc. to ensure end-to-end integrity of the solution being developed.
✓ Work closely with the rest of the tech leads and senior engineering leadership team to ensure reuse where applicable to increase productivity and throughput.
✓ Conduct technical interviews to staff open positions in the backend team.
✓ Delegate work and assignments to team members
✓ Collaborate with the team to identify and fix technical problems
✓ Analyze users' needs and then find applications to serve them
✓ Drive assigned tasks related to SOC 2 certification and ensure compliance to defined controls for areas under lead’s purview.
✓ Guide the team through technical issues and challenges
✓ Prepare technical design documents that help the team understand the technical flow
✓ Actively participate in customer calls, especially discussions related to technical/architectural topics, and provide inputs.
Required Experience:
✓ Backend Lead with around 14 years of experience
✓ Serverless computing architecture
✓ NodeJS, MySQL, Jenkins, Python, GitLab Technologies
✓ Good knowledge of AWS Cloud
✓ Full cycle AWS implementation experience
✓ Project experience in development and maintenance support for AWS web services and cloud-based implementations
✓ Experience leading teams of up to 10+ professionals
✓ Ability to manage multiple tasks and projects in a fast-moving environment
Educational Qualifications:
Engineering graduate or B. Tech/MCA with relevant major subjects like Computer Science
Job Description:
Responsibilities:
- Define standards and quality metrics
- Putting testing strategies and practices in place
- Leading the team of test engineers, training them on functional and non-functional needs
- Reporting of status, defining risks contingencies, plans and escalations
- Ensure that several testing and validation processes are improved continuously.
- Ensure several quality improvement tools like code coverage, memory leaks are part of the development cycle.
Requirements and skills:
- Experience with Linux environments
- Experience with Cypress
- Strong programming skills in Python
- Experience in using Selenium WebDriver for test automation (a minimal sketch follows this list)
- Able to follow the agile process for the whole life cycle of testing.
- Expert-level knowledge of Jira bug tracking and planning tool
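A minimal, illustrative Selenium WebDriver sketch in Python (the target page and assertion are placeholders; Selenium 4 manages the browser driver automatically):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com")
        heading = driver.find_element(By.TAG_NAME, "h1")
        assert "Example" in heading.text, "unexpected page heading"
    finally:
        driver.quit()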
Job Description:
Responsibilities:
- Completing all tasks set by the supervisor and assisting wherever possible.
- Observing existing strategies and techniques for coding, debugging, and testing, and adapting to the same
- Ability to maintain composure under pressure
- Ability to work in a team.
- Good observation skills and a willingness to learn.
Skills:
- Proficiency in data structures and algorithms
- Good problem solving and analytical thinking skills
- Knowledge of Linux systems
- Python coding knowledge
- Knowledge of object-oriented programming
- Good verbal and written communication skills.
Requisition Raised by:
Engineering Director