
Job Opportunity: AWS Infrastructure Engineer
- Location: Anywhere, Permanent Remote
- Work Mode: Remote, Work from Home
- Payroll Company: Talpro India
- Experience Required: 4+ Years
- Notice Period: Immediate Joiner Only
Key Responsibilities:
- Cloud Infrastructure Design & Management:
  • Architect, deploy, and maintain AWS services like EC2, S3, VPC, RDS, IAM, Lambda
  • Build secure, scalable cloud environments on AWS; Azure/GCP exposure is a plus
- Security & Compliance:
  • Implement security best practices using IAM, GuardDuty, Security Hub, WAF
  • Apply patching and security updates for Linux and Windows systems
- Networking & Connectivity:
  • Configure and troubleshoot AWS networking (VPC, TGW, Route 53, VPNs, Direct Connect)
  • Manage hybrid environments and URL filtering solutions
- Server & Service Optimization:
  • Optimize Apache, NGINX, MySQL/PostgreSQL on Linux and Windows platforms
  • Ensure server health, availability, and performance
- Firewall & Access Control:
  • Hands-on with physical firewalls like Palo Alto and Cisco
- Support & Integration:
  • Provide L2/L3 support and ensure quick incident resolution
  • Collaborate with DevOps, application, and database teams
- Automation & Monitoring:
  • Automate using Terraform, CloudFormation, Bash, or Python
  • Monitor and optimize using CloudWatch, Trusted Advisor, and Cost Explorer
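To illustrate the kind of automation this role involves, here is a minimal Python sketch that flags EC2 instances missing required tags. The tag policy and instance IDs are hypothetical, and the input merely mirrors the shape of boto3's `describe_instances()` response; in real use the data would come from `boto3.client("ec2").describe_instances()`.

```python
# Hypothetical sketch: flag EC2 instances missing required tags.
# The input mimics the shape of boto3's describe_instances() response.

REQUIRED_TAGS = {"Owner", "Environment"}  # assumed tagging policy


def untagged_instances(reservations):
    """Return IDs of instances missing any of the required tags."""
    missing = []
    for res in reservations:
        for inst in res.get("Instances", []):
            tags = {t["Key"] for t in inst.get("Tags", [])}
            if not REQUIRED_TAGS.issubset(tags):
                missing.append(inst["InstanceId"])
    return missing


sample = [{"Instances": [
    {"InstanceId": "i-0abc", "Tags": [{"Key": "Owner", "Value": "ops"}]},
    {"InstanceId": "i-0def", "Tags": [{"Key": "Owner", "Value": "dev"},
                                      {"Key": "Environment", "Value": "prod"}]},
]}]

print(untagged_instances(sample))  # → ['i-0abc']
```

Keeping the check as a pure function over plain dicts makes it easy to unit-test without touching a live AWS account.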
Must-Have Skills:
- 4+ years of hands-on experience with AWS Cloud
- System administration for both Linux and Windows
- Expertise in AWS networking and security
- Proficiency with Terraform or CloudFormation
- Scripting skills in Bash or Python
- Experience managing Apache, NGINX, MySQL/PostgreSQL
- Familiarity with Palo Alto or Cisco firewalls
- Knowledge of AWS monitoring tools like CloudWatch and Trusted Advisor
Good to Have Skills:
- Exposure to multi-cloud environments like Azure or GCP
- RHCSA or MCSE certification
- Strong collaboration and DevOps integration skills
Certifications Required:
- AWS Certified Solutions Architect (Mandatory)
- RHCSA or MCSE (Preferred)
Summary:
We are looking for an AWS Infrastructure Engineer to manage and secure scalable cloud environments. This role requires hands-on AWS experience, strong system admin skills, and automation expertise. If you’re certified, skilled in modern cloud tooling, and ready to work in a dynamic tech environment, we’d love to connect with you.
Location - Hyderabad
Technical Expertise
Mandatory:
- Should be a team player with at least 8 years of proven system administrator/pre-sales experience.
- Should have deep technical knowledge and real-world usage of application and operating system environments.
- Should have strong knowledge of Windows/Linux operating systems.
- Should have knowledge of networking concepts, network devices (VPN), security concepts (MFA), network protocols, filesystems, and AAA.
- Should have strong knowledge of at least one desktop virtualization technology (Citrix Apps & Desktop, VMware Horizon, WVD, MS RDP, etc.).
- Should be proactive and self-motivated, and able to partner closely with the sales Account Manager.
- Should have the ability to map customer requirements to the company's products.
- Good to have experience with RFPs.
- Should be proficient in Telugu, Hindi, and English.
- Should be able to articulate the advantages and disadvantages of the offerings based on real-world examples.
- Should have the ability to conduct conversations with prospective clients and convey both the technical and business merits of our solutions.
- Should be willing to travel to customer locations as required.
- Troubleshoot issues, resolve technical challenges, and ensure successful deployment and integration of the digital workspace solution.
- Collaborate with cross-functional teams, including support, engineering, and project management, to address customer needs and concerns.
Role and Responsibilities:
- Learn and get certified on the Company's products.
- Post certification, articulate and describe company products during presentations and white-boarding sessions with customers.
- Maintain good knowledge of competitive products, mainly in the VDI and VPN (remote access) domains.
- Conduct interactive demonstrations remotely or in person.
- Pre-sales account management and presentations.
- Solution design, sizing, and deployment of Accops products and related third-party products.
- Prepare competitive study documents and architecture documents.
- Build, improve, and maintain a high standard of pre-sales and post-sales support.
- Create a knowledge base around the product, including KB articles, tutorial videos, FAQs, and best-practices documents.
- Manage customer expectations and demonstrate strong follow-up on customer inquiries.
- Give technical training to partners' sales and pre-sales teams.
Industry Knowledge:
- Understanding of industry trends, challenges, and best practices related to digital workspaces, remote work, and workforce transformation.
- Familiarity with compliance regulations, security considerations, and data privacy requirements in relation to digital workspace solutions.
Good to have:
- Pre-sales experience in a startup, preferably in the VDI, VPN, MFA, MDM, Azure, or AWS domains.
- Prior knowledge of Accops products.
- Knowledge in solution designing/ Solution architecture.
- Customer interactions at management level.
- Knowledge of Windows Server / Windows 10 / RDS licensing.
- Prior experience drafting RFPs and understanding of the RFP process and the GeM portal.
- Implementation experience with one or more of the products mentioned in the technical skill requirements.

BRIEF DESCRIPTION:
At least 1 year of Python, Spark, SQL, and data engineering experience
Primary Skillset: PySpark, Scala/Python/Spark, Azure Synapse, S3, Redshift/Snowflake
Relevant Experience: Migration of legacy ETL jobs to AWS Glue using Python & Spark
ROLE SCOPE:
Reverse engineer the existing/legacy ETL jobs
Create the workflow diagrams and review the logic diagrams with Tech Leads
Write equivalent logic in Python & Spark
Unit test the Glue jobs and certify the data loads before passing to system testing
Follow the best practices, enable appropriate audit & control mechanism
Be analytically skilled: identify root causes quickly and debug issues efficiently
Take ownership of the deliverables and support the deployments
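The workflow above (reverse engineer the legacy job, rewrite the logic in Python & Spark, then unit test before system testing) is easiest when the transform is factored into a testable function. Below is an illustrative sketch only; in a real Glue job this logic would run over a Spark DataFrame or DynamicFrame, and the column names are hypothetical.

```python
# Illustrative sketch: a legacy-ETL transform rewritten as a pure
# function so it can be unit-tested before wiring it into a Glue job.
# Column names (customer_id, updated_at) are hypothetical.

def dedupe_latest(rows):
    """Keep only the most recent row per customer_id (by updated_at)."""
    latest = {}
    for row in rows:
        key = row["customer_id"]
        if key not in latest or row["updated_at"] > latest[key]["updated_at"]:
            latest[key] = row
    return sorted(latest.values(), key=lambda r: r["customer_id"])


# Unit test the logic before promoting the job to system testing.
rows = [
    {"customer_id": 1, "updated_at": "2024-01-01", "status": "old"},
    {"customer_id": 1, "updated_at": "2024-02-01", "status": "new"},
    {"customer_id": 2, "updated_at": "2024-01-15", "status": "ok"},
]
result = dedupe_latest(rows)
assert [r["status"] for r in result] == ["new", "ok"]
```

Certifying the data loads this way, on small fixtures, catches logic regressions before they reach the Glue environment.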
REQUIREMENTS:
Create data pipelines for data integration into cloud stacks, e.g., Azure Synapse
Code data processing jobs in Azure Synapse Analytics, Python, and Spark
Experience in dealing with structured, semi-structured, and unstructured data in batch and real-time environments.
Should be able to process .json, .parquet and .avro files
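As a minimal example of batch-processing one of these formats, the sketch below parses newline-delimited JSON with the standard library; .parquet and .avro would typically require third-party libraries such as pyarrow or fastavro (not shown here).

```python
# Minimal sketch: batch-process newline-delimited JSON records.
# .parquet / .avro handling would use libraries like pyarrow or fastavro.
import io
import json


def load_ndjson(stream):
    """Parse one JSON record per line, skipping blank lines."""
    return [json.loads(line) for line in stream if line.strip()]


# io.StringIO stands in for an open file or object-store stream.
raw = io.StringIO('{"id": 1, "event": "login"}\n{"id": 2, "event": "logout"}\n')
records = load_ndjson(raw)
print([r["id"] for r in records])  # → [1, 2]
```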
PREFERRED BACKGROUND:
Tier-1/2 candidates from IITs/NITs/IIITs
However, relevant experience and a learning attitude take precedence

About the job
We focus on client adoption of disruptive technologies, technology architecture and providing specialized skills related to integration, custom software engineering, testing, application modernization, agile and more. We help our clients with the most complex projects including working in open web platforms, DevOps platforms as well as intelligent Computing and Architecture enhancement.
We are looking for a hands-on, sharp-thinking applications developer to join our fast-growing team of talented professionals. You will have the opportunity to work on large enterprise solutions and deliver solutions that drive business performance for our customers.
WORK YOU’LL DO
- Design and develop microservices/APIs using Java/Spring Boot, Istio, Kubernetes, Docker, and CI/CD pipelines
- Scale microservices using Kafka or similar messaging systems
- Collaborate with clients, architects, and application architects to understand the operational objectives and purpose of the future system integration
- Understand the points of integration between the different systems and highlight the potential risks associated with the delivery of solutions
- Collaborate with Functional Designers and Developers in order to find best solutions
- Produce detailed functional and technical specifications.
- Assist in producing solutions with threat assessments and associated security awareness.
WHAT WE ARE LOOKING FOR
- 5-8 years of hands-on experience designing and developing microservices using Java/Spring Boot
- 3-5 years of experience with system integration
- Minimum of 3 years of relevant experience with API concepts and technologies such as REST, JSON, XML, SOAP, YAML, GraphQL, and Swagger
- Experience developing within agile methodology using CI/CD pipeline
- Experienced in 3-tier, n-tier, cloud computing, microservices architectures and SOA.
- Good knowledge of integration architectures
- Experience supporting and/or implementing complex integration projects
- Excellent client management skills
- Experience with Data modelling would be an asset
QUALIFICATIONS
- Experience of working in an Agile Environment
- Ability to drive design from Stories and Requirements
- Adept at UML, Design Patterns, and Reusable Services Development
- Knowledge on Scaling Microservices

WHAT WE DO?
KarmaLife (Onionlife Private Limited, karmalife.ai) is a financial solution that addresses the unmet liquidity needs of India's gig and contract workforce. KarmaLife captures real-time data associated with work and financial transactions, and applies machine learning & AI to viably provide relevant, affordable and easy-to-use financial services, including credit, savings and insurance. KarmaLife partners with employers and aggregators of blue-collar workers to package these services as financial wellness benefits that in turn enhance worker productivity and retention.
To learn more about the solution, watch this 3-minute video: https://www.youtube.com/watch?v=EUbj29GAOeA
WHAT YOU SHALL DO?
We are looking for someone who can provide astute technical support: solving problems from the moment a customer calls the service line until their issues are resolved. This is a high-impact role that will provide a well-rounded perspective on the Finance Products at KarmaLife and brings with it the opportunity to collaborate and interact with multi-functional teams inside KarmaLife and at the partner organization.
- Performs diagnostics and maintenance of Applications
- Configures and troubleshoots backend Applications
- Performs customer request/problem identification and follows defined procedures to resolve correctly.
- Does incident tracking and takes necessary action to resolve the issue
- Documents troubleshooting efforts and customer information in data capture tool and when required, transfers call or promptly notifies responsible party for resolution
- You will have direct communication with client’s teams and will liaise closely with internal teams to reproduce and resolve issues.
- Managing communication channels, such as e-mail, phone and other support management applications.
- Maintaining high customer satisfaction ratings and service level agreements
- May complete and resolve non-call customer contact requests received by web or email
WHO WE ARE LOOKING FOR?
We are looking for a Technical Support Engineer to provide enterprise-level assistance to our customers. You will diagnose and troubleshoot software and hardware problems and help our customers install applications and programs. We want candidates who share our vision of building inclusive FinTech for India's fast-growing gig economy.
An ideal candidate would have the following qualifications:
- Education in an IT background or 2+ years of working experience with Linux/Unix environments.
- Knowledge of AWS or other Public Cloud Platform
- Scripting with BASH or Python - intermediate level
- Log parsing and analysis
- Flexibility in using different technologies/platforms
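As an illustration of the log parsing and analysis skills listed above, here is a short Python sketch that extracts the level field from syslog-style lines with a regex and counts errors. The log format shown is an assumption for the example, not any specific product's format.

```python
# Hypothetical sketch: parse syslog-style lines and count ERROR entries.
# The "timestamp LEVEL message" layout is an assumed format.
import re

LINE_RE = re.compile(r"^(?P<ts>\S+ \S+) (?P<level>[A-Z]+) (?P<msg>.*)$")


def count_errors(lines):
    """Count lines whose level field is ERROR."""
    count = 0
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group("level") == "ERROR":
            count += 1
    return count


logs = [
    "2024-05-01 10:00:01 INFO service started",
    "2024-05-01 10:00:05 ERROR payment gateway timeout",
    "2024-05-01 10:00:09 ERROR retry failed",
]
print(count_errors(logs))  # → 2
```

The same pattern (compile once, match per line, aggregate) scales naturally to streaming a large log file line by line.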
An ideal candidate would have the following skills:
- Oral & written: To communicate technical issues, no matter the customer's background or experience level & to provide step-by-step technical help, both written and verbal.
- Problem articulation: Ability to ask about and articulate customer problems and translate them into a solution.
- High on ownership: Motivated and enthusiastic to discover and learn new technologies + disciplined
- Team player: Is able to shift between coordination, cooperation and collaboration quickly.
- Agility: Ability to quickly assess the situation and display a strong bias for action.
Location: Bangalore is preferred
Bonus: If you have experience working with B2B startups & technical client support roles
Perks of working at Karmalife:
- We offer competitive compensation, unparalleled growth & learning, and the upside of an early stage company.
- We take growth & learning seriously, you will have excellent mentors.
- If you're looking to own something from your early days, KarmaLife is your best bet; we appreciate and reward ownership.
- Knowledge of handling goods documentation as per GST law
- Invoices, GST, e-way bills, HSN codes
- Average communication skills in English (verbal and written)
- Basic computer skills in MS Office, especially Word and Excel
- Responsible attitude toward work ethics
- A quick learner, flexible and adaptable
- Able to work in rotational shifts, i.e., especially night shifts

Position: QE Automation Engineer / SDET
Job Location: Pune (work from home until the pandemic ends)
Salary: As per Company Standard
Experience: 8+ years of software test engineering; 2+ years of hands-on experience using Selenium and Cucumber.
Responsibility:
Skills needed for Automation SDETs are :
Excellent communication skills
Must have knowledge of –
Core Java
Selenium with standard Maven, TestNG/JUnit
Cucumber / BDD (Karate will also do)
Rest Assured (including Postman)
GitHub / Sourcetree / GitLab
Good to have skills are –
Jenkins
Shell / Groovy script
Any Cloud experience
Skills we can train on as per need –
GCP Basics
DB (brushup)
Skills needed for Site Reliability Engineers (SREs) are -
Must have –
Java / Python scripting
Shell / Groovy
Jenkins / Bamboo and related devops areas
Cloud experience
Good to have skills are –
GCP Intermediate skills
DB skills
Early-joining candidates are strongly preferred, ideally within 2-3 weeks.
Will be developing mobile and web applications using the latest technology. Should be good at analysing requirements and translating them into applications. Good at understanding application flows.
Tech Skills –
MEAN (MongoDB, Express, Angular/React, Node.js) – Expert level - at least 2-3 full sized projects
Cloud technology – Familiar with using cloud technology (AWS, GCP etc) – Intermediate – Should be able to interact with the services from cloud (e.g. firebase etc)
HTML, CSS, Bootstrap, JavaScript – Intermediate level – Good understanding of concepts; should be able to realize application screens based on the UI provided by designers
Mobile technology – Hybrid (ionic, Cordova, capacitor, flutter), Native (Android, iOS) – Beginner – Should have understanding of concepts, good with basics
We are looking for a full-time remote DevOps Engineer who has worked with CI/CD automation, big data pipelines and cloud infrastructure, to solve complex technical challenges at scale that will reshape the healthcare industry for generations. You will get the opportunity to be involved in the latest tech in big data engineering, novel machine learning pipelines and highly scalable backend development. The successful candidate will work in a team of highly skilled and experienced developers, data scientists, and the CTO.
Job Requirements
- Experience deploying, automating, maintaining, and improving complex services and pipelines
- Strong understanding of DevOps tools, processes, and methodologies
- Experience with AWS Cloud Formation and AWS CLI is essential
- The ability to work to project deadlines efficiently and with minimum guidance
- A positive attitude and enjoys working within a global distributed team
Skills
- Highly proficient working with CI/CD and automating infrastructure provisioning
- Deep understanding of the AWS Cloud platform and hands-on experience setting up and maintaining large-scale implementations
- Experience with JavaScript/TypeScript, Node, Python and Bash/Shell Scripting
- Hands on experience with Docker and container orchestration
- Experience setting up and maintaining big data pipelines, Serverless stacks and containers infrastructure
- An interest in healthcare and medical sectors
- Technical degree with 4+ years of infrastructure and automation experience
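Since CloudFormation experience is called essential above, here is a minimal sketch of provisioning automation: programmatically rendering a CloudFormation template for a single S3 bucket. The bucket name and function are hypothetical; in practice the rendered JSON would be passed to the AWS CLI (`aws cloudformation deploy`) or boto3's `create_stack` (not shown).

```python
# Illustrative sketch only: render a minimal CloudFormation template
# as a Python dict. The bucket name is a hypothetical example; real
# deployment would hand this JSON to the AWS CLI or boto3.
import json


def s3_bucket_template(bucket_name):
    """Build a minimal CloudFormation template for one S3 bucket."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }


template = s3_bucket_template("example-data-bucket")
print(json.dumps(template, indent=2))
```

Generating templates in code like this keeps infrastructure definitions parameterized and testable before anything touches an AWS account.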

- Write, test, debug and ship code and gather feedback on the scale, performance, security to incorporate back into the platform.
- Work with the founders to identify complex technical problems and solve them.
- Work with the product design and client experience development team to support them with scalable services
- Feed into the overall mission and vision of the eParchi platform over the coming months and years.
- An ability to perform well in a fast-paced environment
- Excellent analytical and multitasking skills.

