50+ Shell Scripting Jobs in India
Job Title: Site Reliability Engineer (SRE) / Application Support Engineer
Experience: 3–7 Years
Location: Bangalore / Mumbai / Pune
About the Role
The successful candidate will join the S&C Site Reliability Engineering (SRE) Team, responsible for providing Tier 2/3 support to S&C business applications and environments. This role requires close collaboration with client-facing teams (Client Services, Product, and Research) as well as Infrastructure, Technology, and Application Development teams to maintain and support production and non-production environments.
Key Responsibilities
- Provide Tier 2/3 product technical support and issue resolution.
- Develop and maintain software tools to improve operations and support efficiency.
- Manage system and software configurations; troubleshoot environment-related issues.
- Identify opportunities to optimize system performance through configuration improvements or development suggestions.
- Plan, document, and deploy software applications across Unix/Linux, Azure, and GCP environments.
- Collaborate with Development and QA teams throughout the software release lifecycle.
- Analyze and improve release and deployment processes to drive automation and efficiency.
- Coordinate with infrastructure teams for maintenance, planned downtimes, and resource management across production and non-production environments.
- Participate in on-call support (minimum one week per month) for off-hour emergencies and maintenance activities.
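The support tooling and monitoring duties above often start as small shell scripts. A minimal sketch of that kind of Tier 2/3 tooling (the service names and report format here are invented for illustration):

```shell
#!/bin/sh
# Scan a (hypothetical) service status report and list services needing attention.
# In a real SRE role the report would come from a monitoring system, not a here-doc.

report="$(cat <<'EOF'
app-server   UP
job-runner   DOWN
cache        UP
report-api   DOWN
EOF
)"

# Collect every service whose second column reads DOWN.
failed=$(printf '%s\n' "$report" | awk '$2 == "DOWN" { print $1 }')

if [ -n "$failed" ]; then
    printf 'ALERT: services down:\n%s\n' "$failed"
else
    echo "All services healthy"
fi
```

In practice a script like this would be wired into the on-call alerting path rather than printing to stdout.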
Required Skills & Qualifications
- Education:
- Bachelor’s degree in Computer Science, Engineering, or a related field (BE/MCA).
- Master’s degree is a plus.
- Experience:
- 3–7 years in Production Support, Application Management, or Application Development (support/maintenance).
- Technical Skills:
- Strong Unix/Linux administration skills.
- Excellent scripting skills — Shell, Python, Batch (mandatory).
- Database expertise — Oracle (must have).
- Understanding of Software Development Life Cycle (SDLC).
- PowerShell knowledge is a plus.
- Experience in Java or Ruby development is desirable.
- Exposure to cloud platforms (GCP, Azure, or AWS) is an added advantage.
- Soft Skills:
- Excellent problem-solving and troubleshooting abilities.
- Strong collaboration and communication skills.
- Ability to work in a fast-paced, cross-functional environment.
About NuWare
NuWare is a global technology and IT services company built on the belief that, in a dynamically evolving ecosystem, organizations need transformational strategies to scale, grow, and build for the future. We drive our clients' success in today's hyper-competitive market by serving their needs with next-gen technologies: AI/ML, NLP, chatbots, and digital and automation tools.
We empower businesses to enhance their competencies, processes, and technologies to fully leverage opportunities and accelerate impact. Through our focus on market differentiation and innovation, we offer services that are agile, streamlined, efficient, and customer-centric.
Headquartered in Iselin, NJ, NuWare has been creating business value and generating growth opportunities for clients through its network of partners, global resources, highly skilled talent, and SMEs for 25 years. NuWare is technology agnostic and offers services for Systems Integration, Cloud, Infrastructure Management, Mobility, Test Automation, Data Sciences, and Social & Big Data Analytics.
Skills Required
- Automation testing with UFT, strong SQL skills, and good communication skills
- 5 years of experience in automation testing
- Experience with UFT for at least 3 years
- Good knowledge of VB Scripting
- Knowledge of Manual testing
- Knowledge of automation frameworks
Job Description
Position - Full Stack Developer
Location - Mumbai
Experience - 2-5 Years
Who we are
Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products that enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics-based diagnostic solution for Tuberculosis was recognized as one of the top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India at the BIRAC Showcase event in Delhi, 2022.
Objectives of this Role:
- Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
- Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
- Develop state of the art enterprise standard front-end and backend services
- Develop cloud platform services based on container orchestration platform
- Continuously embrace automation for repetitive tasks
- Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
- Build robust, unit-testable tech modules, automating recurring tasks and processes
- Engage effectively with team members and collaborate to upskill and unblock each other
Frontend Skills
- HTML5
- CSS frameworks (LESS / SASS / Tailwind)
- ES6 / TypeScript
- Desktop apps (Electron / Tauri)
- Component libraries (Bootstrap, Material UI, Lit)
- Responsive web layout (Flex layout, Grid layout)
- Package managers (yarn / npm / turbo)
- Build tools (Vite / Webpack / Parcel)
- Frameworks: React with Redux or MobX / Next.js
- Design patterns
- Testing (Jest / Mocha / Jasmine / Cypress)
- Functional programming concepts
- Scripting (PowerShell, Bash, Python)
Backend Skills
- Node.js (Express / NestJS)
- Python / Rust
- REST APIs
- SOLID design principles
- Databases (PostgreSQL / MySQL / Redis / Cassandra / MongoDB)
- Caching (Redis)
- Container technology (Docker / Kubernetes)
- Cloud (Azure, AWS, OpenShift, Google Cloud)
- Version control: Git
- GitOps
- Automation (Terraform, Ansible)
Cloud Skills
- Object storage
- VPC concepts
- Containerized deployment
- Serverless architecture
Other Skills
- Innovation and thought leadership
- UI - UX design skills
- Interest in learning new tools, languages, workflows, and philosophies to grow
- Communication
To know more about us: https://haystackanalytics.in/
Dear Candidate,
Greetings from Wissen Technology.
We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.
About Wissen Technology:
- The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
- Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
- Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
- Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
- Globally present with offices in the US, India, the UK, Australia, Mexico, and Canada.
- We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
- Wissen Technology has been certified as a Great Place to Work®.
- Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
- Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
- The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.
Job Description:
Please find below details:
Experience - 4+ Years
Location- Bangalore/Mumbai/Pune
Team Responsibilities
The successful candidate will be part of the S&C SRE Team, which provides Tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams (Client Services, Product, and Research) as well as Infrastructure/Technology and Application Development teams to perform environment and application maintenance and support.
Key Responsibilities
• Provide Tier 2/3 product technical support.
• Build software tools to aid operations and support activities.
• Manage system/software configurations and troubleshoot environment issues.
• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.
• Plan, document, and deploy software applications on our Unix/Linux, Azure, and GCP based systems.
• Collaborate with development and software testing teams throughout the release process.
• Analyze release and deployment processes to identify key areas for automation and optimization.
• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (production and non-production).
• Spend a minimum of one week per month on call to help with off-hour emergencies and maintenance activities.
Required skills and experience
• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)
• Master’s degree a plus
• 6–8 years’ experience in a Production Support, Application Management, or Application Development (support/maintenance) role
• Excellent problem-solving/troubleshooting skills; fast learner
• Strong knowledge of Unix administration
• Strong scripting skills in Shell, Python, and Batch (a must)
• Strong database experience (Oracle)
• Strong knowledge of the Software Development Life Cycle
• PowerShell is nice to have
• Software development skills in Java or Ruby
• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have
🚀 Hiring: PL/SQL Developer
⭐ Experience: 5+ Years
📍 Location: Pune
⭐ Work Mode:- Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
What We’re Looking For:
☑️ Hands-on PL/SQL developer with strong database and scripting skills, ready to work onsite and collaborate with cross-functional financial domain teams.
Key Skills:
✅ Must Have: PL/SQL, SQL, Databases, Unix/Linux & Shell Scripting
✅ Nice to Have: DevOps tools (Jenkins, Artifactory, Docker, Kubernetes), AWS/Cloud, Basic Python, AML/Fraud/Financial domain, Actimize (AIS/RCM/UDM)
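The Unix/Linux and shell scripting skills this role pairs with PL/SQL typically mean generating and driving SQL from the shell. A small illustrative sketch (the table names are made up; a real script would feed the output to a client such as sqlplus):

```shell
#!/bin/sh
# Generate a per-table row-count query from a list of table names.
# Purely textual: it builds SQL, it does not connect to a database.

tables="accounts
transactions
alerts"

sql=$(printf '%s\n' "$tables" | while read -r t; do
    # Emit one SELECT per table, labeling each result row with its table name.
    printf 'SELECT '\''%s'\'' AS table_name, COUNT(*) AS n FROM %s;\n' "$t" "$t"
done)

printf '%s\n' "$sql"
```

The same pattern (shell loop emitting SQL) scales to reconciliation and data-quality checks common in financial-domain support work.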
Skills and competencies:
Required:
- Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.
- Working experience in PySpark and Scala to develop code that validates and implements models in Credit Risk/Banking.
- Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.
- Familiarity with machine learning frameworks and libraries (e.g., scikit-learn, SparkML, TensorFlow, PyTorch).
- Experience in systems integration, web services, and batch processing.
- Experience migrating code to PySpark/Scala is a big plus.
- Ability to act as a liaison, conveying the information needs of the business to IT and data constraints to the business, with equal fluency in business strategy and IT strategy, business processes, and workflow.
- Flexibility in approach and thought process.
- Willingness to learn and comprehend periodic changes in regulatory requirements per the FED.
Job Overview :
We are looking for an experienced PL/SQL Developer to join our Professional Services team. The role involves developing and configuring enterprise-grade solutions, supporting clients during testing, and collaborating with internal teams. Candidates with strong expertise in PL/SQL and Unix/Linux are preferred, along with exposure to cloud, DevOps, or financial domains.
Key Responsibilities
- Develop and configure software features as per design specifications and enterprise standards.
- Interact with clients to resolve technical queries and support User Acceptance Testing (UAT).
- Collaborate with internal R&D, Professional Services, and Customer Support teams.
- Occasionally work at client sites or across different time zones.
- Ensure secure, scalable, and high-quality code.
Must-Have Skills
- Strong hands-on experience in PL/SQL, SQL, and Databases (Oracle, MS-SQL, MySQL, Postgres, MongoDB).
- Proficiency in Unix/Linux commands and shell scripting.
Nice-to-Have Skills
- Basic understanding of DevOps tools (Jenkins, Artifactory, Docker, Kubernetes).
- Exposure to Cloud environments (AWS preferred).
- Awareness of Python programming.
- Experience in AML, Fraud, or Financial Markets domain.
- Knowledge of Actimize (AIS/RCM/UDM).
Education & Experience
- Bachelor’s degree in Computer Science, Engineering, or equivalent.
- 4–8 years of overall IT experience, with 4+ years in software development.
- Strong problem-solving, communication, and customer interaction skills.
- Ability to work independently in time-sensitive environments.
Shift timings : Afternoon
Job Summary
We are seeking an experienced Senior Java Developer with strong expertise in legacy system migration, server management, and deployment. The candidate will be responsible for maintaining, enhancing, and migrating an existing Java/JSF (PrimeFaces), EJB, REST API, and SQL Server-based application to a modern Spring Boot architecture. The role involves ensuring smooth production deployments, troubleshooting server issues, and optimizing the existing infrastructure.
Key Responsibilities
● Maintain and enhance the existing Java, JSF (PrimeFaces), EJB, REST API, and SQL Server application.
● Migrate the legacy system to Spring Boot while ensuring minimal downtime.
● Manage deployments using Ansible, GlassFish/Payara, and deployer.sh scripts.
● Optimize and troubleshoot server performance (Apache, Payara, GlassFish).
● Handle XML file generation, email integrations, and REST API maintenance.
● Database management (SQL Server) including query optimization and schema updates.
● Collaborate with teams to ensure smooth transitions during migration.
● Automate CI/CD pipelines using Maven, Ansible, and shell scripts.
● Document migration steps, deployment processes, and system architecture.
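The posting names a deployer.sh script without showing its contents, so here is a hypothetical sketch of what such a helper might look like: validate the artifact name, then print the Payara deployment steps it would run. Every name and path below is an assumption, and the script only prints its plan rather than executing it:

```shell
#!/bin/sh
# Hypothetical deployer.sh-style helper: dry-run only, for illustration.

artifact="${1:-app-1.0.0.war}"

# Refuse anything that is not a WAR archive.
case "$artifact" in
    *.war) ;;
    *) echo "ERROR: expected a .war artifact, got: $artifact" >&2; exit 1 ;;
esac

# The steps a real script might hand to asadmin or Ansible (names assumed).
steps="stop-app ${artifact%.war}
copy $artifact -> /opt/payara/deploy/
start-app ${artifact%.war}"

printf '%s\n' "$steps"
```

A real deployment script would replace the printed steps with actual server commands and add rollback handling for failed deployments.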
Required Skills & Qualifications
● 8+ years of hands-on experience with Java, JSF (PrimeFaces), EJB, and REST APIs.
● Strong expertise in Spring Boot (migration experience from legacy Java is a must).
● Experience with Payara/GlassFish server management and deployment.
● Proficient in Apache, Ansible, and shell scripting (deployer.sh).
● Solid knowledge of SQL Server (queries, stored procedures, optimization).
● Familiarity with XML processing, email integrations, and Maven builds.
● Experience in production deployments, server troubleshooting, and performance tuning.
● Ability to work independently and lead migration efforts.
Preferred Skills
● Knowledge of microservices architecture (helpful for modernization).
● Familiarity with cloud platforms (AWS/Azure) is a plus.
Role: DevOps Engineer
Exp: 4 - 7 Years
CTC: up to 28 LPA
Key Responsibilities
• Design, build, and manage scalable infrastructure on cloud platforms (GCP, AWS, Azure, or OCI)
• Administer and optimize Kubernetes clusters and container runtimes (Docker, containerd)
• Develop and maintain CI/CD pipelines for multiple services and environments
• Manage infrastructure as code using tools like Terraform and/or Pulumi
• Automate operations with Python and shell scripting for deployment, monitoring, and maintenance
• Ensure high availability and performance of production systems and troubleshoot incidents effectively
• Monitor system metrics and implement observability best practices using tools like Prometheus, Grafana, ELK, etc.
• Collaborate with development, security, and product teams to align infrastructure with business needs
• Apply best practices in cloud networking, Linux administration, and configuration management
• Support compliance and security audits; assist with implementation of cloud security measures (e.g., firewalls, IDS/IPS, IAM hardening)
• Participate in on-call rotations and incident response activities
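The "automate operations with Python and shell scripting" duty above often begins with small checks like this one: flag filesystems above a usage threshold from `df` output. The 80% threshold is an illustrative choice:

```shell
#!/bin/sh
# Report filesystems whose usage exceeds a threshold, parsed from POSIX df output.

threshold=80

over=$(df -P | awk -v t="$threshold" 'NR > 1 {
    gsub(/%/, "", $5)              # strip the % sign from the Use% column
    if ($5 + 0 > t) print $6, $5 "%"
}')

if [ -n "$over" ]; then
    printf 'WARNING: filesystems over %s%%:\n%s\n' "$threshold" "$over"
else
    echo "All filesystems under ${threshold}% used"
fi
```

Hooked into cron or a monitoring agent, a check like this becomes one signal in the observability stack (Prometheus, Grafana, ELK) the role mentions.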
If interested, kindly share your updated resume on 82008 31681.
Job Title : Senior DevOps Engineer
Experience : 5+ Years
Location : Gurgaon, Sector 39
About the Role :
We are seeking an experienced Senior DevOps Engineer to lead our DevOps practices, manage a small team, and build functional, scalable systems that enhance customer experience. You will be responsible for deployments, automation, troubleshooting, integrations, monitoring, and team mentoring while ensuring secure and efficient operations.
Mandatory Skills :
Linux Administration, Shell Scripting, CI/CD (Jenkins), Git/GitHub, Docker, Kubernetes, AWS, Ansible, Database Administration (MariaDB/MySQL/MongoDB), Apache httpd/Tomcat, HAProxy, Nagios, Keepalived, Monitoring/Logging/Alerting, and On-premise Server Management.
Key Responsibilities :
- Implement and manage integrations as per business and customer needs.
- Deploy product updates, fixes, and enhancements.
- Provide Level 2 technical support and resolve production issues.
- Build tools to reduce errors and improve system performance.
- Develop scripts and automation for CI/CD, monitoring, and visualization.
- Perform root cause analysis of incidents and implement long-term fixes.
- Ensure robust monitoring, logging, and alerting systems are in place.
- Manage on-premise servers and ensure smooth deployments.
- Collaborate with development teams for system integration.
- Mentor and guide a team of 3 to 4 engineers.
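The scripting-for-monitoring responsibilities above can be sketched with a tiny log summarizer: count errors per service so spikes stand out. The log lines are invented sample data; a real version would tail production logs or query a logging framework:

```shell
#!/bin/sh
# Summarize ERROR counts per service from structured application logs.

log="$(cat <<'EOF'
2024-05-01T10:00:01 api ERROR timeout
2024-05-01T10:00:02 api INFO ok
2024-05-01T10:00:03 worker ERROR oom
2024-05-01T10:00:04 api ERROR timeout
EOF
)"

# Field 2 is the service, field 3 the level; tally errors per service.
summary=$(printf '%s\n' "$log" | awk '$3 == "ERROR" { n[$2]++ }
    END { for (s in n) printf "%s %d\n", s, n[s] }' | sort)

printf '%s\n' "$summary"
```

Output like this is easy to feed into Nagios-style alerting, which is how small scripts like this support the root-cause-analysis work listed above.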
Required Qualifications & Experience :
- Bachelor’s degree in Computer Science, Software Engineering, IT, or related field (Master’s preferred).
- 5+ years of experience in DevOps engineering with team management exposure.
- Strong expertise in:
- Linux Administration & Shell Scripting
- CI/CD pipelines (Jenkins or similar)
- Git/GitHub, branching, and code repository standards
- Docker, Kubernetes, AWS, Ansible
- Database administration (MariaDB, MySQL, MongoDB)
- Web servers (Apache httpd, Apache Tomcat)
- Networking & Load Balancing tools (HAProxy, Keepalived)
- Monitoring & alerting tools (Nagios, logging frameworks)
- On-premise server management
- Strong debugging, automation, and system troubleshooting skills.
- Knowledge of security best practices including data encryption.
Personal Attributes :
- Excellent problem-solving and analytical skills.
- Strong communication and leadership abilities.
- Detail-oriented with a focus on reliability and performance.
- Ability to mentor juniors and collaborate with cross-functional teams.
- Keen interest in emerging DevOps and cloud trends.
Python Developer
Location: Hyderabad (Apple Office)
Experience: 8+ years (Retail / E-commerce preferred)
Budget- 1.9 lpm + GST
Contract: 1 Year + Extendable
Job Responsibilities / Requirements:
- 8+ years of proven experience, preferably in retail or e-commerce environments.
- Strong expertise in Python development.
- Excellent communication skills with the ability to collaborate across multiple teams.
- Hands-on experience with Container & Orchestration: Kubernetes, Docker.
- Expertise in Infrastructure Automation via Kubernetes YAML configurations.
- Strong skills in Scripting & Automation: Python, Shell Scripts (Bash).
- Familiarity with CI/CD Pipelines: GitHub Actions, Jenkins.
- Experience with Monitoring & Logging: Splunk, Grafana.
- Immediate Joiners Preferred – Urgent Support Required.
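The "infrastructure automation via Kubernetes YAML configurations" skill above often amounts to templating manifests from scripts. A minimal sketch, where the app name, image, and replica count are placeholder assumptions:

```shell
#!/bin/sh
# Template a minimal Kubernetes Deployment manifest from shell variables.
# Generates YAML only; applying it (kubectl apply -f -) is left out on purpose.

app=checkout
image=registry.example.com/checkout:1.4.2
replicas=3

manifest=$(cat <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $app
spec:
  replicas: $replicas
  selector:
    matchLabels:
      app: $app
  template:
    metadata:
      labels:
        app: $app
    spec:
      containers:
        - name: $app
          image: $image
EOF
)

printf '%s\n' "$manifest"
```

In CI/CD pipelines (GitHub Actions, Jenkins) the same idea is usually handled by Helm or Kustomize, but a shell template like this shows the underlying mechanics.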
Python Developer
Location: Hyderabad (Apple Office)
Experience: 8+ years (Retail / E-commerce preferred)
Budget- 1.9 lpm + GST
Contract: 1 Year + Extendable
Job Responsibilities / Requirements:
- 8+ years of proven experience, preferably in retail or e-commerce environments.
- Strong expertise in Python development.
- Excellent communication skills with the ability to collaborate across multiple teams.
- Hands-on experience with Container & Orchestration: Kubernetes, Docker.
- Expertise in Infrastructure Automation via Kubernetes YAML configurations.
- Strong skills in Scripting & Automation: Python, Shell Scripts (Bash).
- Familiarity with CI/CD Pipelines: GitHub Actions, Jenkins.
- Experience with Monitoring & Logging: Splunk, Grafana.
- Immediate Joiners Preferred – Urgent Support Required.
Must Have -
a. Background working with Startups
b. Good knowledge of Kubernetes & Docker
c. Background working in Azure
What you’ll be doing
- Ensure that our applications and environments are stable, scalable, secure and performing as expected.
- Proactively engage and work in alignment with cross-functional colleagues to understand their requirements, contributing to and providing suitable supporting solutions.
- Develop and introduce systems that support rapid growth, including deployment policies, new procedures, configuration management, and planning for patches and capacity upgrades.
- Observability: ensure suitable levels of monitoring and alerting are in place to keep engineers aware of issues.
- Establish runbooks and procedures to keep outages to a minimum. Jump in before users notice that things are off track, then automate it for the future.
- Automate everything so that nothing is ever done manually in production.
- Identify and mitigate reliability and security risks. Make sure we are prepared for peak times, DDoS attacks, and fat fingers.
- Troubleshoot issues across the whole stack - software, applications and network.
- Manage individual project priorities, deadlines, and deliverables as part of a self-organizing team.
- Learn and unlearn every day by exchanging knowledge and new insights, conducting constructive code reviews, and participating in retrospectives.
Requirements
- 2+ years of extensive Linux server administration experience, including patching, packaging (rpm), performance tuning, networking, user management, and security.
- 2+ years of implementing systems that are highly available, secure, scalable, and self-healing on the Azure cloud platform.
- Strong understanding of networking, especially in cloud environments, along with a good understanding of CI/CD.
- Prior experience implementing industry-standard security best practices, including those recommended by Azure.
- Proficiency with Bash and any high-level scripting language.
- Basic working knowledge of observability stacks like ELK, Prometheus, Grafana, SigNoz, etc.
- Proficiency with Infrastructure as Code and infrastructure testing, preferably using Pulumi/Terraform.
- Hands-on experience building and administering VMs and containers using tools such as Docker/Kubernetes.
- Excellent communication skills, spoken as well as written, with a demonstrated ability to articulate technical problems and projects to all stakeholders.
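The "automate everything" and self-healing themes above can be sketched with a generic retry helper with backoff, exercised here against a simulated flaky task (a counter file) so the example is self-contained. The attempt counts and the fake task are illustrative assumptions:

```shell
#!/bin/sh
# Retry a command up to $max times; returns non-zero if it never succeeds.

retry() {
    max=$1; shift
    n=1
    until "$@"; do
        [ "$n" -ge "$max" ] && return 1
        n=$((n + 1))
        sleep 0   # real scripts would back off, e.g. sleep $((n * 2))
    done
}

counter=$(mktemp)
echo 0 > "$counter"

# Simulated flaky task: fails on the first two calls, succeeds on the third.
flaky() {
    c=$(($(cat "$counter") + 1))
    echo "$c" > "$counter"
    [ "$c" -ge 3 ]
}

retry 5 flaky && attempts_used=$(cat "$counter")
echo "task succeeded after $attempts_used attempts"
rm -f "$counter"
```

Wrapping restart or health-check commands in a helper like this is one concrete way runbooks get turned into automation before users notice an outage.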
EDI Developer / Map Conversion Specialist
Role Summary:
Responsible for converting 441 existing EDI maps into the PortPro-compatible format and testing them for 147 customer configurations.
Key Responsibilities:
- Analyze existing EDI maps in Profit Tools.
- Convert, reconfigure, or rebuild maps for PortPro.
- Ensure accuracy in mapping and transformation logic.
- Unit test and debug EDI transactions.
- Support system integration and UAT phases.
Skills Required:
- Proficiency in EDI standards (X12, EDIFACT) and transaction sets.
- Hands-on experience in EDI mapping tools.
- Familiarity with both Profit Tools and PortPro data structures.
- SQL and XML/JSON data handling skills.
- Experience with scripting for automation (Python, Shell scripting preferred).
- Strong troubleshooting and debugging skills.
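The scripting-for-automation skill listed above shows up in EDI work as quick sanity checks on interchange files. A hedged sketch: split a tiny, made-up X12 interchange on its `~` segment terminator and verify the ISA/IEA envelope. Real map validation is far more involved:

```shell
#!/bin/sh
# Envelope sanity check on a minimal, invented X12 interchange string.

x12='ISA*00*          *00*          *ZZ*SENDER*ZZ*RECEIVER*240501*1200*U*00401*000000001*0*T*:~GS*SM*SENDER*RECEIVER*20240501*1200*1*X*004010~ST*204*0001~SE*2*0001~GE*1*1~IEA*1*000000001~'

# X12 segments end with '~'; turn each into its own line.
segments=$(printf '%s' "$x12" | tr '~' '\n')

first=$(printf '%s\n' "$segments" | head -n 1 | cut -d'*' -f1)
last=$(printf '%s\n' "$segments" | tail -n 1 | cut -d'*' -f1)

if [ "$first" = "ISA" ] && [ "$last" = "IEA" ]; then
    echo "envelope OK: $(printf '%s\n' "$segments" | wc -l) segments"
else
    echo "envelope BROKEN (first=$first last=$last)" >&2
fi
```

Checks like this catch truncated transmissions before maps ever run, which is useful during the UAT phases the posting describes.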
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub Actions.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
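As a flavour of the Shell proficiency expected, here is a minimal, hypothetical bash automation skeleton: strict error handling, timestamped logging, and guaranteed cleanup on exit. No real deployment steps are included; the comments mark where pipeline-specific work (Terraform, image builds, smoke tests) would go.

```shell
#!/usr/bin/env bash
# Illustrative automation skeleton, not any specific client pipeline.
set -euo pipefail   # fail fast on errors, unset vars, and pipe failures

log() { printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*"; }

workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT   # always clean up scratch space

log "starting deployment checks in $workdir"
# ... real steps (terraform plan, image build, smoke tests) would go here ...
log "done"
```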
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.
Work Mode & Timing:
- Hybrid – Pune-based candidates preferred.
- Working hours: 12:30 PM to 9:30 PM IST to align with client time zones.
Springer Capital is a cross-border asset management firm focused on real estate investment banking in China and the USA. We are offering a remote internship for individuals passionate about automation, cloud infrastructure, and CI/CD pipelines. Start and end dates are flexible, and applicants may be asked to complete a short technical quiz or assignment as part of the application process.
Responsibilities:
- Assist in building and maintaining CI/CD pipelines to automate development workflows
- Monitor and improve system performance, reliability, and scalability
- Manage cloud-based infrastructure (e.g., AWS, Azure, or GCP)
- Support containerization and orchestration using Docker and Kubernetes
- Implement infrastructure as code using tools like Terraform or CloudFormation
- Collaborate with software engineering and data teams to streamline deployments
- Troubleshoot system and deployment issues across development and production environments
Tableau Server Administrator (10+ Yrs Exp.)
Location: Remote
Experience: 10+ years
Mandatory Skills & Qualifications:
1. Proven expertise in Tableau architecture, clustering, scalability, and high availability.
2. Proficiency in PowerShell, Python, or Shell scripting.
3. Experience with cloud platforms (AWS, Azure, GCP) and Tableau Cloud.
4. Familiarity with database systems (SQL Server, Oracle, Snowflake).
5. Any relevant certification is a plus.
Job Title: Senior/Lead Performance Test Engineer (JMeter Specialist)
Experience: 5-10 Years
Location: Remote / Pune, India
Job Summary:
We are looking for a highly skilled and experienced Senior/Lead Performance Test Engineer with a strong background in Apache JMeter to lead and execute performance testing initiatives for our web and mobile applications. The ideal candidate will be a hands-on expert in designing, scripting, executing, and analyzing complex performance tests, identifying bottlenecks, and collaborating with cross-functional teams to optimize system performance. This role is critical in ensuring our applications deliver exceptional user experiences under various load conditions.
Key Responsibilities:
Performance Test Strategy & Planning:
Define, develop, and implement comprehensive performance test strategies and plans aligned with project requirements and business objectives for web and mobile applications.
Collaborate with product owners, developers, architects, and operations teams to understand non-functional requirements (NFRs) and service level agreements (SLAs).
Determine appropriate performance test types (Load, Stress, Endurance, Spike, Scalability) and define relevant performance metrics and acceptance criteria.
Scripting & Test Development (JMeter Focus):
Design, develop, and maintain robust and scalable performance test scripts using Apache JMeter for various protocols (HTTP/S, REST, SOAP, JDBC, etc.).
Implement advanced JMeter features including correlation, parameterization, assertions, custom listeners, and logic controllers to simulate realistic user behavior.
Develop modular and reusable test assets.
Integrate performance test scripts into CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps) for continuous performance monitoring.
Test Execution & Monitoring:
Set up and configure performance test environments, ensuring they accurately mimic production infrastructure (including cloud environments like AWS, Azure, GCP).
Execute performance tests in various environments, managing large-scale load generation using JMeter (standalone or distributed mode).
Monitor system resources (CPU, Memory, Disk I/O, Network) and application performance metrics using various tools (e.g., Grafana, Prometheus, ELK stack, AppDynamics, Dynatrace, New Relic) during test execution.
Analysis & Reporting:
Analyze complex performance test results, identify performance bottlenecks, and pinpoint root causes across application, database, and infrastructure layers.
Interpret monitoring data, logs, and profiling reports to provide actionable insights and recommendations for performance improvements.
Prepare clear, concise, and comprehensive performance test reports, presenting findings, risks, and optimization recommendations to technical and non-technical stakeholders.
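As a small illustration of the result-analysis work described above, the sketch below summarises elapsed times from a sample CSV laid out like a default JMeter JTL (`timeStamp,elapsed,label,...`). The data is invented; a real analysis would run over the full results file and include percentiles.

```shell
# Invented sample in default JTL column order.
cat > results.jtl <<'EOF'
timeStamp,elapsed,label,responseCode
1700000000000,120,Login,200
1700000000100,340,Login,200
1700000000200,95,Search,200
1700000000300,410,Search,200
EOF

# Average and max elapsed time (ms) per transaction label.
summary=$(awk -F',' 'NR > 1 { sum[$3] += $2; if ($2 > max[$3]) max[$3] = $2; n[$3]++ }
  END { for (l in n) printf "%s avg=%.1f max=%d\n", l, sum[l]/n[l], max[l] }' results.jtl)
printf '%s\n' "$summary"
```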
Collaboration & Mentorship:
Work closely with development and DevOps teams to troubleshoot, optimize, and resolve performance issues.
Act as a subject matter expert in performance testing, providing technical guidance and mentoring to junior team members.
Contribute to the continuous improvement of performance testing processes, tools, and best practices.
Required Skills & Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
5-10 years of hands-on experience in performance testing, with a strong focus on web and mobile applications.
Expert-level proficiency with Apache JMeter for scripting, execution, and analysis.
Strong understanding of performance testing methodologies, concepts (e.g., throughput, response time, latency, concurrency), and lifecycle.
Experience with performance monitoring tools such as Grafana, Prometheus, CloudWatch, Azure Monitor, GCP Monitoring, AppDynamics, Dynatrace, or New Relic.
Solid understanding of web technologies (HTTP/S, REST APIs, WebSockets, HTML, CSS, JavaScript) and modern application architectures (Microservices, Serverless).
Experience with database performance analysis (SQL/NoSQL) and ability to write complex SQL queries.
Familiarity with cloud platforms (AWS, Azure, GCP) and experience in testing applications deployed in cloud environments.
Proficiency in scripting languages (e.g., Groovy, Python, Shell scripting) for custom scripting and automation.
Excellent analytical, problem-solving, and debugging skills.
Strong communication (written and verbal) and interpersonal skills, with the ability to effectively collaborate with diverse teams and stakeholders.
Ability to work independently, manage multiple priorities, and thrive in a remote or hybrid work setup.
Good to Have Skills:
Experience with other performance testing tools (e.g., LoadRunner, Gatling, k6, BlazeMeter).
Knowledge of CI/CD pipelines and experience integrating performance tests into automated pipelines.
Understanding of containerization technologies (Docker, Kubernetes).
Experience with mobile application performance testing tools and techniques (e.g., device-level monitoring, network emulation).
Certifications in performance testing or cloud platforms.
Salesforce DevOps/Release Engineer
Resource type - Salesforce DevOps/Release Engineer
Experience - 5 to 8 years
Norms - PF & UAN mandatory
Resource Availability - Immediate or Joining time in less than 15 days
Job - Remote
Shift timings - UK timings (1 PM to 10 PM or 2 PM to 11 PM)
Required Experience:
- 5–6 years of hands-on experience in Salesforce DevOps, release engineering, or deployment management.
- Strong expertise in Salesforce deployment processes, including CI/CD pipelines.
- Significant hands-on experience with at least two of the following tools: Gearset, Copado, Flosum.
- Solid understanding of Salesforce architecture, metadata, and development lifecycle.
- Familiarity with version control systems (e.g., Git) and agile methodologies.
Key Responsibilities:
- Design, implement, and manage CI/CD pipelines for Salesforce deployments using Gearset, Copado, or Flosum.
- Automate and optimize deployment processes to ensure efficient, reliable, and repeatable releases across Salesforce environments.
- Collaborate with development, QA, and operations teams to gather requirements and ensure alignment of deployment strategies.
- Monitor, troubleshoot, and resolve deployment and release issues.
- Maintain documentation for deployment processes and provide training on best practices.
- Stay updated on the latest Salesforce DevOps tools, features, and best practices.
Technical Skills:
- Deployment Tools: Hands-on with Gearset, Copado, Flosum for Salesforce deployments
- CI/CD: Building and maintaining pipelines, automation, and release management
- Version Control: Proficiency with Git and related workflows
- Salesforce Platform: Understanding of metadata, SFDX, and environment management
- Scripting: Familiarity with scripting (e.g., Shell, Python) for automation (preferred)
- Communication: Strong written and verbal communication skills
Preferred Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or related field.
Certifications:
Salesforce certifications (e.g., Salesforce Administrator, Platform Developer I/II) are a plus.
Experience with additional DevOps tools (Jenkins, GitLab, Azure DevOps) is beneficial.
Experience with Salesforce DX and deployment strategies for large-scale orgs.
Job Title : Ab Initio Developer
Location : Pune
Experience : 5+ Years
Notice Period : Immediate Joiners Only
Job Summary :
We are looking for an experienced Ab Initio Developer to join our team in Pune.
The ideal candidate should have strong hands-on experience in Ab Initio development, data integration, and Unix scripting, with a solid understanding of SDLC and data warehousing concepts.
Mandatory Skills :
Ab Initio (GDE, EME, graphs, parameters), SQL/Teradata, Data Warehousing, Unix Shell Scripting, Data Integration, DB Load/Unload Utilities.
Key Responsibilities :
- Design and develop Ab Initio graphs/plans/sandboxes/projects using GDE and EME.
- Manage and configure standard environment parameters and multifile systems.
- Perform complex data integration from multiple source and target systems with business rule transformations.
- Utilize DB Load/Unload Utilities effectively for optimized performance.
- Implement generic graphs, ensure proper use of parallelism, and maintain project parameters.
- Work in a data warehouse environment involving SDLC, ETL processes, and data analysis.
- Write and maintain Unix Shell Scripts and use utilities like sed, awk, etc.
- Optimize and troubleshoot performance issues in Ab Initio jobs.
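The sed/awk work mentioned above might look like this minimal sketch: cleaning a hypothetical pipe-delimited feed (CRLF line endings, blank lines, case normalisation) before a DB load. The file layout is invented for illustration.

```shell
# Invented pipe-delimited feed: id|name|region.
cat > feed.dat <<'EOF'
1001|alice|south
1002|bob|north

1003|carol|east
EOF

# sed: strip CR line endings and drop blank lines;
# awk: upper-case the region column, preserving the delimiter.
sed -e 's/\r$//' -e '/^$/d' feed.dat |
  awk -F'|' 'BEGIN { OFS = "|" } { $3 = toupper($3); print }' > feed.clean
cat feed.clean
```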
Mandatory Skills :
- Strong expertise in Ab Initio (GDE, EME, graphs, parallelism, DB utilities, multifile systems).
- Experience with SQL and databases like SQL Server or Teradata.
- Proficiency in Unix Shell Scripting and Unix utilities.
- Data integration and ETL from varied source/target systems.
Good to Have :
- Experience in Ab Initio and AWS integration.
- Knowledge of Message Queues and Continuous Graphs.
- Exposure to Metadata Hub.
- Familiarity with Big Data tools such as Hive, Impala.
- Understanding of job scheduling tools.
Job Title : IBM Sterling Integrator Developer
Experience : 3 to 5 Years
Locations : Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, Pune
Employment Type : Full-Time
Job Description :
We are looking for a skilled IBM Sterling Integrator Developer with 3–5 years of experience to join our team across multiple locations.
The ideal candidate should have strong expertise in IBM Sterling and integration, along with scripting and database proficiency.
Key Responsibilities :
- Develop, configure, and maintain IBM Sterling Integrator solutions.
- Design and implement integration solutions using IBM Sterling.
- Collaborate with cross-functional teams to gather requirements and provide solutions.
- Work with custom languages and scripting to enhance and automate integration processes.
- Ensure optimal performance and security of integration systems.
Must-Have Skills :
- Hands-on experience with IBM Sterling Integrator and associated integration tools.
- Proficiency in at least one custom scripting language.
- Strong command over Shell scripting, Python, and SQL (mandatory).
- Good understanding of EDI standards and protocols is a plus.
Interview Process :
- 2 Rounds of Technical Interviews.
Additional Information :
- Open to candidates from Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, and Pune.
Overview
As an engineer in the Service Operations division, you will be responsible for the day-to-day management of the systems and services that power client products. Working with your team, you will ensure daily tasks and activities are successfully completed and where necessary, use standard operating procedures and knowledge to resolve any faults/errors encountered.
Job Description
Key Tasks and Responsibilities:
Ensure daily tasks and activities complete successfully; where this is not the case, undertake recovery and remediation steps.
Undertake patching and upgrade activities in support of ParentPay compliance programs: PCI DSS, ISO 27001, and Cyber Essentials Plus.
Action requests from the ServiceNow work queue that have been allocated to your relevant resolver group. These include incidents, problems, changes and service requests.
Investigate alerts and events detected from the monitoring systems that indicate a change in component health.
Create and maintain support documentation in the form of departmental wiki and ServiceNow knowledge articles that allow for continual improvement of fault detection and recovery times.
Work with colleagues to identify and champion the automation of all manual interventions undertaken within the team.
Attend and complete all mandatory training courses.
Engage and own the transition of new services into Service Operations.
Participate in the out of hours on call support rota.
Qualifications and Experience:
Experience working in an IT service delivery or support function OR
MBA or Degree in Information Technology or Information Security.
Experience working with Microsoft technologies.
Excellent communication skills developed working in a service centric organisation.
Ability to interpret fault descriptions provided by customers or internal escalations and translate these into resolutions.
Ability to manage and prioritise own workload.
Experience working within Education Technology would be an advantage.
Technical knowledge:
Advanced automation scripting using Terraform and PowerShell.
Knowledge of Bicep and Ansible advantageous.
Advanced Microsoft Active Directory configuration and support.
Microsoft Azure and AWS cloud hosting platform administration.
Advanced Microsoft SQL server experience.
Windows Server and desktop management and configuration.
Microsoft IIS web services administration and configuration.
Advanced management of data and SQL backup solutions.
Advanced scripting and automation capabilities.
Advanced knowledge of Azure analytics and KQL.
Skills & Requirements
IT Service Delivery, Information Technology, Information Security, Microsoft Technologies, Communication Skills, Fault Interpretation, Workload Prioritization, Automation Scripting, Terraform, PowerShell, Microsoft Active Directory, Microsoft Azure, AWS, Microsoft SQL Server, Windows Server, Windows Desktop Configuration, Microsoft IIS, Data Backup Management, SQL Backup Solutions, Scripting, Azure Analytics, KQL.
Dear Candidate,
We are urgently looking for a Release / Big Data Engineer for our Pune location.
Experience : 5-8 yrs
Location : Pune
Skills: Big Data Engineer, Release Engineer, DevOps, AWS/Azure/GCP cloud experience
JD:
- Oversee the end-to-end release lifecycle, from planning to post-production monitoring, and coordinate with cross-functional teams (DBA, BizOps, DevOps, DNS).
- Partner with development teams to resolve technical challenges in deployment and automation test runs.
- Work with shared-services DBA teams on schema-based multi-tenancy designs and smooth migrations.
- Drive automation for batch deployments and DR exercises, including YAML-based microservice deployment using Shell/Python/Go.
- Provide oversight of Big Data deployment toolsets (e.g., Spark, Hive, HBase) in private cloud and public cloud CDP environments.
- Ensure high-quality releases with a focus on stability and long-term performance.
- Run automation batch scripts, debug deployment and functional issues, and work with dev leads to resolve release-cycle issues.
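As a toy example of the shell glue this role describes, the sketch below pulls an image tag out of a hypothetical deployment YAML with awk before a batch rollout. A real pipeline would use a proper YAML parser (e.g., yq); the file contents and service name are invented.

```shell
# Hypothetical deployment descriptor.
cat > deploy.yaml <<'EOF'
service: billing
image:
  repository: registry.local/billing
  tag: 1.4.2
EOF

# Crude extraction of the tag value -- fine for a sketch, not for nested YAML.
tag=$(awk '$1 == "tag:" { print $2 }' deploy.yaml)
echo "deploying billing:$tag"
```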
Regards,
Minakshi Soni
Executive- Talent Acquisition
Rigel Networks
Dear Candidate,
We are urgently hiring an AWS Cloud Engineer for our Bangalore location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: Aws Cloud
Salary: Best in Industry (20-25% Hike on the current ctc)
Note:
only Immediate to 15 days Joiners will be preferred.
Only candidates from Tier 1 companies will be shortlisted and selected.
Candidates with a notice period of more than 30 days will be rejected during screening.
Offer shoppers will be rejected.
Job description:
Description:
Title: AWS Cloud Engineer
Prefer BLR / HYD – else any location is fine
Work Mode: Hybrid – based on HR rule (currently 1 day per month)
Shift Timings: 24x7 (work in shifts on a rotational basis)
Total Experience: 8+ years, with at least 5 years of relevant experience.
Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting
Experience and Skills Requirements:
Experience:
8+ years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working data warehouse knowledge (Redshift and Snowflake preferred)
Working with IaC – Terraform and CloudFormation
Working understanding of scripting languages, including Python and Shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets, including Splunk and Datadog
Experience using GitHub Actions
Experience using AWS RDS/SQL-based solutions
Experience working with streaming technologies, including Kafka and Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a Shared Managed Service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technical hands on
Provide Incident and Problem management on the AWS IaaS and PaaS Platform
Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner
Actively monitor an AWS platform for technical issues
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third party suppliers and AWS to jointly resolve incidents
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN
Job Description
Position - SRE developer / DevOps Engineer
Location - Mumbai
Experience- 3- 10 years
About HaystackAnalytics:
HaystackAnalytics is a company working in deep technology of genomics, computing and data science for creating the first of its kind clinical reporting engine in Healthcare. We are a new but well funded company with a tremendous amount of pedigree in the team (IIT Founders, IIT & IIM core team). Some of the technologies we have created are a global first in infectious disease and chronic diagnostics. As a product company creating a huge amount of IP, our Technology and R&D team are our crown jewels. With early success of our products in India, we are now expanding to take our products to international shores.
Inviting Passionate Engineers to join a new age enterprise:
At HaystackAnalytics, we rely on our dynamic team of engineers to solve the many challenges and puzzles that come with our rapidly evolving stack that deals with Healthcare and Genomics.
We’re looking for full stack engineers who are passionate problem solvers, ready to work with new technologies and architectures in a forward-thinking organization that’s always pushing boundaries. Here, you will take complete, end-to-end ownership of projects across the entire stack.
Our ideal candidate has experience building enterprise products and has an understanding of, and experience working with, new-age front-end technologies, web frameworks, APIs, databases, distributed computing, back-end languages, caching, security, message-based architectures, and more.
You’ll be joining a small team working at the forefront of new technology, solving the challenges that impact both the front end and back end architecture, and ultimately, delivering amazing global user experiences.
Objectives of this Role:
- Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
- Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
- Develop state of the art enterprise standard front-end and backend services
- Develop cloud platform services based on container orchestration platform
- Continuously embrace automation for repetitive tasks
- Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
- Build robust tech modules that are Unit Testable, Automating recurring tasks and processes
- Engage effectively with team members and collaborate to upskill and unblock each other
Frontend Skills
- HTML5
- CSS frameworks (LESS/SASS)
- ES6 / TypeScript
- Electron app / Tauri
- Component libraries (Web Components / Radix / Material)
- CSS (Tailwind)
- State management: Redux / Zustand / Recoil
- Build tools: webpack / Vite / Parcel / Turborepo
- Frameworks: Next.js
- Design patterns
- Test automation frameworks (Cypress, Playwright, etc.)
- Functional programming concepts
- Scripting (Bash, Python)
Backend Skills
- Node / Deno / Bun with Express / NestJS
- Languages: TypeScript / Python / Rust
- REST / GraphQL
- SOLID design principles
- Storage (MongoDB / object storage / Postgres)
- Caching (Redis / in-memory data grid)
- Pub/sub (Kafka / SQS / SNS / EventBridge / RabbitMQ)
- Container technology (Docker / Kubernetes)
- Cloud (Azure, AWS, OpenShift)
- GitOps
- Automation (Terraform, serverless)
Other Skills
- Innovation and thought leadership
- UI - UX design skills
- Interest in learning new tools, languages, workflows, and philosophies to grow
- Communication
- Advanced Linux/Unix support experience required.
- Strong shell scripting and Python programming skills for SRE-related activities required.
- Understanding of Veritas Cluster Server, Load Balancers, VMware, and Splunk required.
- Knowledge of ITIL principles required.
- Effective oral and written communication skills, and interpersonal skills to work well in a team environment, required.
- Strong organizational and coordination skills, with the ability to manage multiple tasks and high-pressure situations during outage handling, management, or resolution.
- Availability for weekend work required.
Position Responsibilities :
- Primary focus on installation, testing, and configuration for Oracle: ASM, backup/recovery, security, monitoring, maintenance, logging, HA, and DR setup.
- Manage hundreds of Oracle databases, using innovative routines/tools to identify opportunities for better security and availability
- Setup and manage Oracle OEM
- Analyze, solve, and correct issues in real time
- Bring standards across environments and products by developing SOPs.
- Optimize day-to-day operations and bring efficiencies
- Plan and implement industry best practices for the best utilization of available resources
- Perform duties in compliance with Deltek, industry, and regulatory security and compliance standards as defined
- Work with cross-functional teams to assist in application and hosting optimization
- Respond effectively to high-priority issues and incidents when needed
- Provide technical expertise and training to internal and external teams for Oracle Administration
- Work independently and with a global team on highly specialized projects
- Provide on-call support as required
Qualifications :
- 10+ years of Oracle DBA experience, at least 4 of which are in 12c or newer with multitenancy in a virtualized environment, AWS is preferred
- In-depth knowledge and extensive hands-on experience with Oracle, Grid, ASM, OEM, Fleet Management, RMAN, Data Guard, TDE, CMU, Patching, and other critical components of Oracle
- In-depth knowledge of Pluggable Databases, virtualized platforms, Linux OS environments, and operating system internals
- Thorough understanding of configuration options, configurable components, concepts, and technologies
- Strong expertise and hands-on with installation, configuration, backup, and restoration of all data
- Extensive experience in troubleshooting and maintaining 100+ database instances using a variety of tools
- Experience with large-scale database management designs, best practices, and issues
- Good experience with Shell/Python database scripting to automate effort-consuming tasks in innovative ways
- Excellent verbal and written communication skills
- Excellent time management and prioritization skills with an ability to escalate concerns or questions
- Solid, data-driven analytical skills, with the ability to create reports and present decision-supporting data/patterns
- A plus is knowledge and experience in OCI (Oracle Cloud Infrastructure), Snowflake, and Rubrik.
- Individuals will be working European hours to support Denmark.
- Improves and contributes toward the team's delivery process and raises change requests appropriately. Can estimate the effort required and ensure priority and urgency are understood. Ensures appropriate monitoring is in place.
- Displays good skills in executing and creating solutions for toil reduction and efficiency. Exercises abilities in troubleshooting, critical thinking, and problem-solving.
- Handles all incidents and leads incident response with appropriate communication; takes a systematic approach to problems and sees them through to conclusion.
- Delivers business value by improving functional/product knowledge, executing best practices, and contributing ideas toward innovation.
- Ability to exercise analytical and technical thinking across multiple areas of responsibility.
- When delivering tasks, collaborating, and managing feedback, their sphere of influence extends to supervisors, peers, clients, and other teams. Their decisions affect a wide range of teams or areas of responsibility and facilitate the crafting and implementation of strategies.
Job Title: Database Engineer
Location: Bangalore, Karnataka
Company: Wissen Technology
Experience: 5-7 Years
Joining Period: Immediate candidates, Currently Serving, 15-20 days ONLY
About Us:
Wissen Technology is a global leader in technology consulting and development. We specialize in delivering top-notch solutions for Fortune 500 companies across various industries. Join us to work on cutting-edge projects in a collaborative and innovative environment.
Key Responsibilities:
- Develop and maintain ETL processes using the Informatica ETL tool, adhering to best practices.
- Write efficient and optimized SQL queries and manage database operations.
- Create and manage UNIX shell scripts to support automation and database activities.
- Analyze business requirements, propose robust solutions, and implement them effectively.
- Work collaboratively with global teams to deliver high-quality solutions within an Agile framework.
- Leverage JIRA or other ALM tools to maintain a productive development environment.
- Stay updated with new technologies and concepts to address business challenges innovatively.
Required Skills:
- Proficiency in Informatica ETL tools and ETL processes.
- Strong SQL database expertise.
- Advanced hands-on experience with UNIX shell scripting.
- Experience working in Agile methodologies.
- Familiarity with JIRA or similar ALM tools.
- Excellent problem-solving, verbal, and written communication skills.
- Proven ability to collaborate with global teams effectively.
Desired Skills:
- Knowledge of financial markets, payment solutions, and wealth management.
- Experience with Spark scripting (preferred but not mandatory).
Qualifications:
- BTech / MTech/ MCA or any related field
- Candidate needs to be an immediate joiner / serving notice period / 15-20 days
Why Join Wissen Technology?
- Opportunity to be part of a growing team focused on data-driven innovation and quality.
- Exposure to global clients and complex data management projects.
- Competitive benefits, including health coverage, paid time off, and a collaborative work environment.
At Wissen Technology, we value our team members and their contributions. We offer competitive compensation, opportunities for growth, and an environment where your ideas can make a difference!
We look forward to welcoming a detail-oriented and driven Database Engineer to our team!
Job Description:
We are looking for a highly skilled Software Developer with 3-5 years of hands-on experience in LAMP Stack development. The ideal candidate will be responsible for developing and maintaining web applications, ensuring high performance, and collaborating with cross-functional teams to define and ship new features.
Key Responsibilities:
• Develop, maintain, and optimize web applications using LAMP Stack (Linux, Apache, MySQL, PHP)
• Design efficient, scalable, and secure backend systems.
• Collaborate with frontend developers and project stakeholders to deliver high-quality products.
• Write well-structured and testable code, following best practices in PHP.
• Troubleshoot, debug, and upgrade existing applications.
• Develop APIs and integrate third-party services.
• Manage and maintain databases using MySQL.
• Perform system and server maintenance on Linux/Unix environments.
• Use Shell scripting to automate routine tasks and deployments.
• Work with the ELK stack (Elasticsearch, Logstash, Kibana) for logging and monitoring.
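A routine-task automation of the kind described above might look like this sketch, which keeps only the three newest nightly dump files in a directory. The file names and retention policy are invented; a production script would also verify dump integrity before pruning.

```shell
# Create a scratch directory with five fake nightly MySQL dumps.
backups=$(mktemp -d)
for d in 01 02 03 04 05; do
  touch "$backups/db-2024-06-$d.sql.gz"
done

# List newest first (dates sort lexically), skip the first three, delete the rest.
ls -1 "$backups" | sort -r | tail -n +4 | while read -r old; do
  rm -f "$backups/$old"
done
ls "$backups"
```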
Qualifications:
• BE/B.Tech in Computer Science/Information Technology/Electronics and Communications
• 3-5 years of proven experience as a Software Developer working with LAMP Stack
• Proficient in PHP and MySQL.
• Solid experience with Linux/Unix environments, including command-line proficiency.
• Knowledge of Shell Scripting and basic system automation.
• Familiarity with ELK Stack for application monitoring and logging.
• Strong understanding of web protocols, API integration, and database optimization.
• Experience with version control systems like Git.
• Ability to work in an Agile environment and manage multiple tasks efficiently.
Nice to Have:
• Experience with other Python frameworks besides Flask.
• Knowledge of cloud platforms (AWS, Google Cloud) for deploying and scaling applications.
• Familiarity with containerisation technologies like Docker.
Benefits:
• Competitive salary and performance bonuses.
• Medical and insurance coverage.
• Opportunities for professional development and growth.
• Flexible working hours and remote working options.
Job Requirements:
Intermediate Linux Knowledge
- Experience with shell scripting
- Familiarity with Linux commands such as grep, awk, sed
- Required
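The grep/awk/sed familiarity asked for above boils down to one-liners like these. The log format is made up purely to demonstrate each tool.

```shell
#!/usr/bin/env bash
# Illustrative grep/awk/sed one-liners on an invented access log.
set -eu

log=$(mktemp)
cat > "$log" <<'EOF'
2024-05-01 10:00:01 INFO  login user=alice
2024-05-01 10:00:07 ERROR timeout user=bob
2024-05-01 10:00:09 INFO  logout user=alice
EOF

# grep: keep only ERROR lines.
grep 'ERROR' "$log"

# awk: print the time and user fields of every event.
awk '{ print $2, $NF }' "$log"

# sed: rewrite the severity column on the output stream.
sed 's/ERROR/CRITICAL/' "$log"

rm -f "$log"
```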
Advanced Python Scripting Knowledge
- Strong expertise in Python
- Required
Ruby
- Nice to have
Basic Knowledge of Network Protocols
- Understanding of TCP/UDP, Multicast/Unicast
- Required
Packet Captures
- Experience with tools like Wireshark, tcpdump, tshark
- Nice to have
High-Performance Messaging Libraries
- Familiarity with tools like Tibco, 29West, LBM, Aeron
- Nice to have
Key Responsibilities:
• Install, configure, and maintain Hadoop clusters.
• Monitor cluster performance and ensure high availability.
• Manage Hadoop ecosystem components (HDFS, YARN, Ozone, Spark, Kudu, Hive).
• Perform routine cluster maintenance and troubleshooting.
• Implement and manage security and data governance.
• Monitor systems health and optimize performance.
• Collaborate with cross-functional teams to support big data applications.
• Perform Linux administration tasks and manage system configurations.
• Ensure data integrity and backup procedures.
What You Can Expect from Us:
Here at Nomiso, we work hard to provide our team with the best opportunities to grow their careers. You can expect to be a pioneer of ideas, a student of innovation, and a leader of thought. Innovation and thought leadership is at the center of everything we do, at all levels of the company. Let’s make your career great!
Position Overview:
The Principal Cloud Network Engineer is a key interface to client teams and is responsible for developing convincing technical solutions. This requires working closely with clients and multiple partner-vendor teams to architect the solution.
This position requires sound technical knowledge, proven business acumen, and differentiating client-facing ability. You are required to anticipate, create, and define innovative solutions that match the customer's needs and the client's tactical and strategic requirements.
Roles and Responsibilities:
- Design and implement next-generation networking technologies
- Deploy/support large-scale production network
- Track, analyze, and trend capacity on the broadcast network and datacenter infrastructure
- Provide Tier 3 escalated network support
- Perform fault management and problem resolution
- Work closely with other departments, vendors, and service providers
- Perform network change management, support modifications, and maintenance
- Perform network upgrade, maintenance, and repair work
- Lead implementation of new systems
- Perform capacity planning and management
- Suggest opportunities for improvement
- Create and support network management objectives, policies, and procedures
- Ensure network documentation is kept up-to-date
- Train and assist junior engineers.
Must Have Skills:
Candidates with overall 10+ years of experience in the following:
- Hands-on: Routers/Switches, Firewalls (Palo Alto or similar), Load Balancers (LTM, GTM), AWS networking (VPC, API Gateway, CloudFront, Route 53, Cloud WAN, Direct Connect, PrivateLink, Transit Gateway), Wireless.
- Strong hands-on coding/scripting experience in one or more programming languages such as Python, Golang, Java, Bash, etc.
- Networking technologies: Routing Protocols (BGP, EIGRP & OSPF, VRFs, VLANs, VRRP, LACP, MLAG, TACACS / Rancid / GIT, IPSec VPN, DNS / DHCP, NAT / SNAT, IP Multicast, VPC, Transit Gateway, NAT Gateway, ALB/ELB), Security Groups, ACL, HSRP, VRRP, SNMP, DHCP.
- Managing hardware, IOS, coordinating with vendors/partners for support.
- Managing CDN, links, VPN technologies, SDN/Cisco ACI (design and implementation), and Network Function Virtualization (NFV).
- Reviewing technology designs, and architecture, taking local and regional regulatory requirements into account for Voice, Video Solutions, Routing, Switching, VPN, LAN, WAN, Network Security, Firewalls, NGFW, NAT, IPS, Botnet, Application Control, DDoS, Web Filtering.
- Palo Alto Firewall / Panorama, Big-IQ, and NetBrain tools/technology standards to support daily operations, enhance performance, and improve reliability.
- Creating a real-time contextual living map of the client's network with detailed network specifications, including diagrams and equipment configurations with defined standards.
- Improve the reliability of the service, bring in proactiveness to identify and prevent impact to customers by eliminating Single Point of Failure (SPOF).
- Capturing critical forensic data, and providing complete visibility across the enterprise, for security incidents as soon as a threat is detected, by implementing tools like NetBrain.
Good to Have Skills:
- Industry certifications on Switching, Routing and Security.
- Elastic Load Balancing (ELB), DNS/DHCP, IPSec VPN, Multicast, TACACS/Rancid/Git, ALB/ELB
- AWS Control Tower
- Experience leading a team of 5 or more.
- Strong Analytical and Problem Solving Skills.
- Experience implementing / maintaining Infrastructure as Code (IaC)
- Certifications : CCIE, AWS Certified Advanced Networking
Position Description
We are looking for a highly motivated, hands-on Sr. Database/Data Warehouse Data Analytics developer to work at our Bangalore, India location. Ideal candidate will have solid software technology background with capability in the making and supporting of robust, secure, and multi-platform financial applications to contribute to Fully Paid Lending (FPL) project. The successful candidate will be a proficient and productive developer, a team leader, have good communication skills, and demonstrate ownership.
Responsibilities
- Produce service metrics, analyze trends, and identify opportunities to improve the level of service and reduce cost as appropriate.
- Responsible for design, development and maintenance of database schema and objects throughout the lifecycle of the applications.
- Supporting implemented solutions by monitoring and tuning queries and data loads, addressing user questions concerning data integrity, monitoring performance, and communicating functional and technical issues.
- Helping the team by taking care of production releases.
- Troubleshoot data issues and work with data providers for resolution.
- Closely work with business and applications teams in implementing the right design and solution for the business applications.
- Build reporting solutions for WM Risk Applications.
- Work as part of a banking Agile Squad / Fleet.
- Perform proof of concepts in new areas of development.
- Support continuous improvement of automated systems.
- Participate in all aspects of SDLC (analysis, design, coding, testing and implementation).
Required Skills
- 5 to 7 years of strong database (SQL) knowledge, ETL (Informatica PowerCenter), and Unix shell scripting.
- Database (preferably Teradata) knowledge, database design, performance tuning, writing complex DB programs etc.
- Demonstrate proficient skills in analysis and resolution of application performance problems.
- Database fundamentals; relational and Datawarehouse concepts.
- Should be able to lead a team of 2-3 members and guide them technically and functionally in their day-to-day work.
- Ensure designs, code and processes are optimized for performance, scalability, security, reliability, and maintainability.
- Understanding of requirements of large enterprise applications (security, entitlements, etc.)
- Provide technical leadership throughout the design process and guidance with regards to practices, procedures, and techniques. Serve as a guide and mentor for junior level Software Development Engineers
- Exposure to JIRA or other ALM tools to create a productive, high-quality development environment.
- Proven experience in working within an Agile framework.
- Strong problem-solving skills and the ability to produce high quality work independently and work well in a team.
- Excellent communication skills (written, interpersonal, presentation), with the ability to easily and effectively interact and negotiate with business stakeholders.
- Ability and strong desire to learn new languages, frameworks, tools, and platforms quickly.
- Growth mindset, personal excellence, collaborative spirit
Good to Have Skills:
- Prior work experience with Azure or other cloud platforms such as Google Cloud, AWS, etc.
- Exposure to programming languages (Python/R/Java) and experience implementing data analytics projects.
- Experience in Git and development workflows.
- Prior experience in Banking and Financial domain.
- Exposure to security-based lending is a plus.
- Experience with Reporting/BI Tools is a plus.
We are seeking a dedicated and skilled AI Project Field Engineer to join our team. The successful candidate will be responsible for executing AI projects on-site, ensuring the seamless deployment and operation of AI models and systems. This role requires a combination of technical expertise, problem-solving skills, and a strong customer focus.
Responsibilities:
- Execute and manage AI projects on customer sites, ensuring timely and successful deployment.
- Deploy and run AI models using PyTorch on various hardware configurations.
- Set up and maintain computer networks, particularly those involving IP cameras.
- Write and maintain shell scripts to automate deployment and monitoring tasks.
- Develop and troubleshoot Python code related to AI models and their deployment.
- Collaborate with customers to understand their needs and ensure their success with our AI solutions.
- Perform on-site visits as required to install, test, and troubleshoot AI systems.
- Provide training and support to customers on the use and maintenance of deployed AI systems.
- Work closely with the development team to provide feedback and insights from the field.
- Document all processes, configurations, and customer interactions for future reference.
There are various job roles within software development, each with its own focus and responsibilities. Some common job roles include:
1. **Software Engineer/Developer**: This role involves designing, developing, and testing software applications or systems.
2. **Front-end Developer**: Front-end developers focus on creating the user interface and experience of websites or applications using languages like HTML, CSS, and JavaScript.
3. **Back-end Developer**: Back-end developers work on the server-side of applications, managing databases, servers, and application logic using languages like Python, Java, or Node.js.
4. **Full-stack Developer**: Full-stack developers have expertise in both front-end and back-end development, allowing them to work on all aspects of an application.
5. **Mobile App Developer**: Mobile app developers specialize in creating applications for mobile devices, often using platforms like iOS (Swift) or Android (Java/Kotlin).
6. **DevOps Engineer**: DevOps engineers focus on streamlining the development process by automating tasks, managing infrastructure, and ensuring smooth deployment and operation of software.
7. **Quality Assurance (QA) Engineer**: QA engineers are responsible for testing software to ensure it meets quality standards and is free of bugs or errors.
8. **UI/UX Designer**: UI/UX designers work on designing the user interface and experience of software applications, focusing on usability and aesthetics.

SAMA, along with its team of senior experts in Electronics,
JOB DESCRIPTION:
Position : Linux BSP developer
Location : Bangalore
Experience : 3 to 10 Years
Requirements :
- 3 to 10 years of proficiency working on C and Embedded Linux BSP (Board Support Package).
- Highly proficient in the Linux kernel and Linux device drivers.
- Hands-on experience on platforms such as MIPS, ARM, etc.
- Working knowledge and strong understanding of Device Tree.
- Understanding of Makefiles and their customization, cross-compilation, and shell scripting.
- Experience in working on U-boot.
- Video and Camera domain knowledge will be a BIG advantage.
- Knowledge of secure boot would be an added advantage.
Role: Oracle DBA Developer
Location: Hyderabad
Required Experience: 8 + Years
Skills : DBA, Terraform, Ansible, Python, Shell Script, DevOps activities, Oracle DBA, SQL server, Cassandra, Oracle sql/plsql, MySQL/Oracle/MSSql/Mongo/Cassandra, Security measure configuration
Roles and Responsibilities:
1. 8+ years of hands-on DBA experience in one or many of the following: SQL Server, Oracle, Cassandra
2. DBA experience in a SRE environment will be an advantage.
3. Experience in automation/building databases by providing self-service tools; analyze and implement solutions for database administration (e.g., backups, performance tuning, troubleshooting, capacity planning).
4. Analyze solutions and implement best practices for cloud database and their components.
5. Build and enhance tooling, automation, and CI/CD workflows (Jenkins etc.) that provide safe self-service capabilities to the engineering teams.
6. Implement proactive monitoring and alerting to detect issues before they impact users. Use a metrics-driven approach to identify and root-cause performance and scalability bottlenecks in the system.
7. Work on automation of database infrastructure and help engineering succeed by providing self-service tools.
8. Write database documentation, including data standards, procedures, and definitions for the data dictionary (metadata)
9. Monitor database performance, control access permissions and privileges, capacity planning, implement changes and apply new patches and versions when required.
10. Recommend query and schema changes to optimize the performance of database queries.
11. Have experience with cloud-based environments (OCI, AWS, Azure) as well as On-Premises.
12. Have experience with cloud database such as SQL server, Oracle, Cassandra
13. Have experience with infrastructure automation and configuration management (Jira, Confluence, Ansible, Gitlab, Terraform)
14. Have excellent written and verbal English communication skills.
15. Planning, managing, and scaling of data stores to ensure a business’ complex data requirements are met and it can easily access its data in a fast, reliable, and safe manner.
16. Ensures the quality of orchestration and integration of tools needed to support daily operations by patching together existing infrastructure with cloud solutions and additional data infrastructures.
17. Ensure data security by protecting data through rigorous testing of backup and recovery processes and by frequently auditing well-regulated security procedures.
18. Use software and tooling to automate manual tasks and enable engineers to move fast without the concern of losing data during their experiments.
19. Define service level objectives (SLOs) and perform risk analysis to determine which problems to address and which problems to automate.
20. Bachelor's Degree in a technical discipline required.
21. DBA Certifications required: Oracle, SQLServer, Cassandra (2 or more)
22. Cloud and DevOps certifications will be an advantage.
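The backup-and-recovery testing described above can be sketched in shell. Plain files stand in for a real database dump here, and every path and retention number is invented; a production script would wrap the actual dump tool (mysqldump, pg_dump, etc.).

```shell
#!/usr/bin/env bash
# Sketch of a backup, verify, and rotate routine (illustrative paths only).
set -eu

SRC="${SRC:-/tmp/dbdata}"
BACKUP_ROOT="${BACKUP_ROOT:-/tmp/dbbackups}"
KEEP=5    # retain the five most recent archives (assumed policy)

mkdir -p "$SRC" "$BACKUP_ROOT"
[ -e "$SRC/table.csv" ] || echo "id,name" > "$SRC/table.csv"  # demo payload

stamp=$(date +%Y%m%d%H%M%S)
archive="$BACKUP_ROOT/backup-$stamp.tar.gz"
tar -czf "$archive" -C "$SRC" .

# Verify: a backup you have never read back is not a backup.
tar -tzf "$archive" > /dev/null && echo "verified $archive"

# Rotate: delete everything older than the newest $KEEP archives.
ls -1t "$BACKUP_ROOT"/backup-*.tar.gz | tail -n +$((KEEP + 1)) | xargs -r rm -f
```

A fuller version would also perform periodic restore drills into a scratch instance, which is what "rigorous testing of backup and recovery processes" usually means in practice.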
Must have Skills:
- Oracle DBA with development
- SQL
- DevOps tools
- Cassandra
Machint Solutions, a US-registered IT & digital automation products and services organization, is seeking to hire a couple of Linux Administrators for its office in Whitefields, Kondapur, Hyderabad, Telangana.
Job description
- Minimum 5 years of strong Linux (RHEL & SuSE) Admin knowledge & troubleshooting skills.
- Must know Storage integration with Linux.
- Must have strong scripting (Bash or Shell) knowledge.
- Cluster Knowledge (RHEL & SuSE)
- Monitoring & Patching tools knowledge
- Should have good experience with AWS Cloud & VMWare
- Networking Knowledge with respect to Linux
- Work Location: Machint Solutions Private Limited., Whitefields, Kondapur, Hyderabad
- Notice period: Candidates who can join in 2 weeks are preferred.
- Interview: F2F at our office - Between 11 AM and 6 PM Monday through Friday
- Budget: Market standards
Please share your updated resume on ram dot n at machint dot com with salary and notice period info to initiate the hiring process.
Role & responsibilities
- Provide technical support to customers via email, phone, and ticketing system.
- Troubleshoot and resolve customer issues related to our API Services.
- Prepare customized reports for clients using SQL queries to extract information from database.
- Perform API integration in third-party applications.
- Run APIs in Postman / Unix command line to reproduce error conditions
- Document and track customer issues and resolutions in our ticketing system.
Preferred candidate profile
- Bachelor's degree in Computer Science or related field.
- 2+ years of experience in PHP development or Technical Support role.
- Strong knowledge of API integration and testing.
- Strong knowledge of PHP and MySQL.
- Excellent communication and interpersonal skills.
Perks and benefits
- Flexitime for Work-Life Balance
- Complimentary Office Meals
- Paid Sick Leave and Time Off
- Enjoy a Vibrant Workplace Culture
With over 40 years of innovation, Quantum's end-to-end platform is uniquely equipped to orchestrate, protect, and enrich data across its lifecycle, providing enhanced intelligence and actionable insights. Leading organizations in cloud services, entertainment, government, research, education, transportation, and enterprise IT trust Quantum to bring their data to life, because data makes life better, safer, and smarter. Quantum is listed on Nasdaq (QMCO) and the Russell 2000® Index. For more information visit www.quantum.com.
As a Software Engineer, you will collaborate with engineers and product managers on the development and maintenance of Quantum's DXi-Series of disk-based backup appliance software. Quantum's DXi series protects our customers' data on premises, in the cloud, or in a hybrid environment.
You Are A Part Of:
DXi is a uniquely powerful solution within the Quantum portfolio, allowing customers to meet and exceed their backup needs with one of the fastest products on the market. You’ll work on a product that allows customers to reduce costs, maximize production, scale with ease, and positively impact the environment by reducing power and cooling requirements.
Job Responsibilities:
Responsibilities include, but are not limited to:
• Write code primarily for Linux systems, with programming languages including Python, C, C++, and Perl.
• Design and build differentiating feature sets that continue to expand product capabilities, both on premises and in the cloud.
• Work with development, test, service, and support engineers to develop tactical solutions for customer issues.
• May design and develop automated test suites.
• May maintain lab equipment.
Required Skills and/or Experience:
• Bachelor’s degree in Computer Science, Information Technology, or related field of study required.
• 5-10 years related industry experience required.
• 5+ years software development in C or C++ is required.
• 3-5 years’ experience working in a Linux environment is required.
• Experience in writing scripts: Perl, shell, bash, and/or other scripting tools is required.
• Experience with debugging tools such as GDB is required.
• Experience with source control and shared build environments is required.
Bito is a startup that is using AI (ChatGPT, OpenAI, etc) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31% and performing more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and is incredibly difficult!
We are building this company with a fully remote approach, with our main teams for time zone management in the US and in India. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- Minimum 4+ years of relevant work experience in a DevOps role
- At least 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery in configuration automation tool sets such as Ansible, Chef, etc
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
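The monitoring-and-alerting requirement above is what Nagios, Grafana, and CloudWatch do at scale; the kernel of the idea fits in a few lines of shell. The log format and the 20% threshold below are invented for illustration.

```shell
#!/usr/bin/env bash
# Minimal alerting sketch: flag when the error rate in a log sample
# crosses a threshold (format and threshold are assumptions).
set -eu

LOG="${LOG:-/tmp/svc.log}"
THRESHOLD=20   # alert when more than 20% of lines are errors

# Demo data so the sketch runs standalone.
[ -s "$LOG" ] || printf 'INFO ok\nERROR boom\nINFO ok\nINFO ok\n' > "$LOG"

awk -v t="$THRESHOLD" '
    /ERROR/ { errors++ }
    END {
        rate = (NR ? 100 * errors / NR : 0)
        printf "error rate: %.0f%%\n", rate
        if (rate > t) print "ALERT: error rate above threshold"
    }' "$LOG"
```

In a real setup this check would run on a schedule and push its ALERT line into a pager or chat webhook rather than stdout.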
What do we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
· Work from anywhere
· Flexible work timings
· Competitive compensation, including stock options
· A chance to work in the exciting generative AI space
· Quarterly team offsite events
Roles and Responsibilities:
- Proven experience in Java 8, Spring Boot, Microservices and API
- Strong experience with Kafka, Kubernetes
- Strong experience in using RDBMS (MySQL) and NoSQL.
- Experience working in Eclipse or Maven environments
- Hands-on experience in Unix and Shell scripting
- Hands-on experience in fine-tuning application response and performance testing.
- Experience in Web Services.
- Strong analysis and problem-solving skills
- Strong communication skills, both verbal and written
- Ability to work independently with limited supervision
- Proven ability to use own initiative to resolve issues
- Full ownership of projects and tasks
- Ability and willingness to work under pressure, on multiple concurrent tasks, and to deliver to agreed deadlines
- Eagerness to learn
- Strong team-working skills
Mandatory Skill Set: C++ and Python, UNIX, Database (SQL or Postgres)
Developer Role Exp: 3 to 5 yrs
Location: Bangalore/Chennai/Hyderabad
1. Strong proficiency in C++ , with fair knowledge of the language specification (Telecom experience is preferred).
2. Proficient understanding of standard template library (STL): algorithms, containers, functions, and iterators
3. Must have experience on Unix platforms, should possess shell scripting skills.
4. Knowledge of compilers (gcc, g++) and debuggers (dbx). Knowledge of libraries and linking.
5. Good understanding of code versioning tools (e.g. Git, CVS etc.)
6. Able to write and understand Python scripts (both Python 2 and Python 3).
7. Hands-on with logic implementation in Python; should be familiar with list comprehensions and comfortable integrating Python with C++ and Unix scripts.
8. Able to implement multithreading in both C++ and Python environment.
9. Familiar with PostgreSQL.
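The "integrate Python with Unix scripts" skill in item 7 above often just means piping data between the two. A toy sketch, with made-up values, of a shell pipeline feeding a Python list comprehension:

```shell
#!/usr/bin/env bash
# Shell pipeline handing numbers to a Python list comprehension and back.
set -eu

# Shell side: produce some numbers, one per line.
printf '%s\n' 3 1 4 1 5 |
python3 -c '
import sys
# List comprehension: square every value read from the Unix pipe.
nums = [int(line) ** 2 for line in sys.stdin]
print(" ".join(str(n) for n in nums))
'
```

This prints `9 1 16 1 25`. The same pattern works with a compiled C++ filter in place of either end of the pipe, which is the integration the posting describes.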
C++ developer with Python as secondary - 3 to 4 yrs exp / should be CW.
- Candidates should be able to write sample programs using the tools (Bash, PowerShell, Python, or Shell scripting)
- Analytical/logical reasoning
- GitHub Actions
- Should have good working experience with GitHub Actions
- Repository/Workflow Dispatch, writing reusable workflows, etc
- AZ CLI commands
- Hands-on experience with AZ CLI commands
Title: Platform Engineer Location: Chennai Work Mode: Hybrid (Remote and Chennai Office) Experience: 4+ years Budget: 16 - 18 LPA
Responsibilities:
- Parse data using Python, create dashboards in Tableau.
- Utilize Jenkins for Airflow pipeline creation and CI/CD maintenance.
- Migrate Datastage jobs to Snowflake, optimize performance.
- Work with HDFS, Hive, Kafka, and basic Spark.
- Develop Python scripts for data parsing, quality checks, and visualization.
- Conduct unit testing and web application testing.
- Implement Apache Airflow and handle production migration.
- Apply data warehousing techniques for data cleansing and dimension modeling.
Requirements:
- 4+ years of experience as a Platform Engineer.
- Strong Python skills, knowledge of Tableau.
- Experience with Jenkins, Snowflake, HDFS, Hive, and Kafka.
- Proficient in Unix Shell Scripting and SQL.
- Familiarity with ETL tools like DataStage and DMExpress.
- Understanding of Apache Airflow.
- Strong problem-solving and communication skills.
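The shell-scripting and data-quality-check duties above could look like this minimal sketch: validating a CSV extract before loading it downstream. The file name, columns, and rule are invented for illustration.

```shell
#!/usr/bin/env bash
# Data-quality gate sketch: reject rows with a missing amount column
# before an ETL load (file layout is an assumption).
set -eu

CSV="${CSV:-/tmp/extract.csv}"

# Demo extract so the sketch runs standalone.
[ -s "$CSV" ] || cat > "$CSV" <<'EOF'
id,amount,currency
1,10.50,USD
2,,USD
3,7.25,EUR
EOF

# Flag data rows (skipping the header) whose second field is empty.
awk -F, 'NR > 1 && $2 == "" { bad++; print "bad row:", $0 }
         END { printf "checked %d rows, %d bad\n", NR - 1, bad + 0 }' "$CSV"
```

A real pipeline would route the bad rows to a reject file and fail the load step when the bad count is non-zero, rather than just reporting it.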
Note: Only candidates willing to work in Chennai and available for immediate joining will be considered. Budget for this position is 16 - 18 LPA.
L2 Support
Location : Mumbai, Pune, Bangalore
Requirement details : (Mandatory Skills)
- Excellent communication skills
- Production Support, Incident Management
- SQL ( Must have experience in writing complex queries )
- Unix ( Must have working experience on the Linux operating system )
- Perl/Shell Scripting
- Candidates working in the Investment Banking domain will be preferred
Position: ETL Developer
Location: Mumbai
Exp.Level: 4+ Yrs
Required Skills:
* Strong scripting knowledge such as: Python and Shell
* Strong relational database skills especially with DB2/Sybase
* Create high quality and optimized stored procedures and queries
* Strong with scripting language such as Python and Unix / K-Shell
* Strong knowledge base of relational database performance and tuning such as: proper use of indices, database statistics/reorgs, de-normalization concepts.
* Familiar with lifecycle of a trade and flows of data in an investment banking operation is a plus.
* Experienced in Agile development process
* Java Knowledge is a big plus but not essential
* Experience in delivery of metrics / reporting in an enterprise environment (e.g. demonstrated experience in BI tools such as Business Objects, Tableau, report design & delivery) is a plus
* Experience on ETL processes and tools such as Informatica is a plus. Real time message processing experience is a big plus.
* Good team player; Integrity & ownership