- 8+ years of experience in software development
- 4+ years of recent hands-on experience architecting and building complex solutions that run in SaaS/PaaS environments, especially on the AWS cloud, leveraging SaaS-based microservices development coupled with distributed caching and message queuing.
- Should have experience developing solution architectures and evaluating architectural alternatives for private, public, and hybrid cloud models, including IaaS, PaaS, and other cloud services.
- Should have excellent knowledge of cloud architecture and implementation features (OS, multi-tenancy, virtualization, orchestration, elastic scalability)
- Should have experience with full-stack development in technology stacks/frameworks such as Java, Spring Boot, Python, Redis, SQL, NoSQL, and graph databases.
- Good to have: experience architecting solutions that handle big data; should be proficient in data analytics.
- Must have expert-level proficiency in design/architectural patterns, data structures, and algorithms.
- Must demonstrate knowledge of DevOps tool chains and processes
- Experience in web-based application migration from on-premise to SaaS model is a big plus.
- Experience with integration patterns and associated best practices (e.g., web services, REST APIs, Pub/Sub, message-oriented middleware).
- Excellent knowledge of and hands-on experience with web services and functionally decomposed architectures, load balancing of web services and applications, designing multi-tenant systems, clustering and sharding of data, microservices architecture and design patterns, and throttling and performance management of such services.
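The throttling and performance management mentioned above is commonly implemented with a token-bucket rate limiter. A minimal sketch in Python (the class name, rates, and thresholds are illustrative assumptions, not any specific product's API):

```python
import time

class TokenBucket:
    """Token-bucket throttle: allows bursts up to `capacity`, refills at `rate` tokens/sec."""

    def __init__(self, rate: float, capacity: float, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity       # start full so an initial burst is allowed
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, never exceeding capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Deterministic demo with a fake clock: 10 tokens/sec, burst capacity of 3.
t = [0.0]
bucket = TokenBucket(rate=10.0, capacity=3.0, clock=lambda: t[0])
burst = [bucket.allow() for _ in range(5)]   # 5 calls at t=0 -> only 3 pass
t[0] += 0.2                                  # 0.2 s later -> ~2 tokens refilled
later = bucket.allow()                       # now allowed again
```

In production this logic typically sits in an API gateway or middleware rather than application code; the injectable clock here just makes the behavior deterministic for illustration.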
Enquero is your go-to business technology team, a consulting company of innovative doers. We are serious enough to understand your enterprise challenges and cool enough to know how the newest technologies can be applied as solutions. Our consultants have a deep appreciation for the real problems companies face and the experience and skills to back up the solutions envisioned.
Our goal? To enable agility in your company. We cut down the lengthy assessments, requirements gathering, and scoping exercises to help you find a real solution that we can quickly implement for you. We are intuitively innovative. We know that today’s technology innovation is not about research but is fueled by a creative mix of cutting-edge products, accelerators, and creative know-how to solve problems fast and economically – because guess what? Technology is changing swiftly. We utilize an onshore delivery model that enables flat teams with complementary skills that work in tandem with each other and with you to deliver real outcomes. Couple that with our products, tools, and accelerators, and you have solutions to your most challenging enterprise problems delivered faster than traditional SIs and technology consultants.
- Design technical specifications for RPA (Automation Anywhere) that meet the requirements and handle all non-functional requirements of concurrency, scalability, security, and restart and recovery.
- Develops and configures automation processes per the technical design document to meet the defined requirements. Codes the more complicated automations or reusable components, and delegates to and mentors junior developers for the less complex components.
- Develops new processes/tasks/objects using core workflow principles that are efficient, well-structured, maintainable, and easy to understand.
- Complies with and helps to enforce design and coding standards, policies and procedures.
- Ensures documentation is well maintained.
- Ensures quality of coded components by performing thorough unit testing.
- Works collaboratively with test teams during the Product test and UAT phases to fix assigned bugs with quality.
- Reports status, issues and risks to tech leads on a regular basis
- Improves skills in automation products by completing automation certification.
- Mentors junior developers and performs code reviews for quality control.
Bachelor's degree in Engineering/Computer Science
- 5-8 years of IT experience with a good understanding of programming concepts. Should come from a programming background in a coding language (.NET, Java).
- Working experience in RPA for a minimum of 2 years, with project experience of a minimum of 3 RPA implementations.
- Understands development methodology and lifecycle
- Should be trained on RPA tools (Automation Anywhere).
- Self-motivated, team player, action and results oriented.
- Well organized, good communication and reporting skills.
- Expert troubleshooting skills.
- Expertise in designing highly secure cloud services and cloud infrastructure using AWS (EC2, RDS, S3, ECS, Route 53).
- Experience with DevOps tools including Docker, Ansible, and Terraform.
- Experience with monitoring tools such as Datadog and Splunk.
- Experience building and maintaining large-scale infrastructure in AWS, including leveraging one or more coding languages for automation.
- Experience providing 24x7 on-call production support.
- Understanding of best practices, industry standards, and repeatable, supportable processes.
- Knowledge and working experience of container-based deployments such as Docker, Terraform, and AWS ECS.
- Knowledge and working experience of TCP/IP, DNS, certificates, and networking concepts.
- Knowledge and working experience of the CI/CD development pipeline and the CI/CD maturity model (Jenkins).
- Strong core Linux OS skills, shell scripting, and Python scripting.
- Working experience of modern engineering operations duties, including providing the necessary tools and infrastructure to support high-performance Dev and QA teams.
- Database/MySQL administration skills are a plus.
- Prior work in high-load and high-traffic infrastructure is a plus.
- Clear vision of and commitment to providing outstanding customer service.
Shiprocket is a logistics platform which connects Indian eCommerce SMBs with logistics players to enable end-to-end solutions.
Our innovative data-backed platform drives logistics efficiency, helps reduce cost, increases sales throughput by reducing RTO and improves post order customer engagement and experience.
Our vision is to power all logistics for the direct commerce market in India including first mile, linehaul, last mile, warehousing, cross border and O2O.
We are seeking an experienced DevOps Engineer across product lines.
- Deploy, automate, maintain, and manage AWS cloud-based production systems. Ensure the availability, performance, scalability, and security of production systems.
- Build, release and configuration management of production systems.
- System troubleshooting and problem solving across platform and application domains.
- Suggesting architecture improvements, recommending process improvements.
- Ensuring critical system security through the use of best in class cloud security solutions.
- DevOps: Solid experience as a DevOps Engineer in a 24x7 uptime Amazon AWS environment, including automation experience with configuration management tools.
- Scripting Skills: Strong scripting (e.g. Python, shell scripting) and automation skills.
- Monitoring Tools: Experience with system monitoring tools (e.g. Nagios).
- Problem Solving: Ability to analyze and resolve complex infrastructure resource and application deployment issues.
- DB skills: Basic DB administration experience (RDS, MongoDB); experience in setting up and managing AWS Aurora databases.
- ELK: Proficient in ELK setup
- GitHub: Experienced in maintaining and administering GitHub
- Accountable for proper backup and disaster recovery procedures.
- Experience with Puppet, Chef, Ansible, or Salt
- Professional commitment to high quality, and a passion for learning new skills.
- Detail-oriented individual with the ability to rapidly learn new concepts and technologies.
- Strong problem-solving skills, including providing simple solutions to complex situations.
- Must be a strong team player with the ability to communicate and collaborate effectively in a geographically dispersed working environment.
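The backup and disaster-recovery accountability above usually includes enforcing a snapshot retention policy. A minimal sketch of the pruning logic (a hypothetical helper for illustration only; a real setup would call the cloud provider's snapshot APIs with these results):

```python
from datetime import date

def prune_snapshots(snapshot_dates, keep: int):
    """Split snapshot dates into (kept, to_delete), keeping the `keep` most recent."""
    ordered = sorted(snapshot_dates, reverse=True)  # newest first
    return ordered[:keep], ordered[keep:]

# Five daily snapshots, retention policy of 3.
snapshots = [date(2024, 1, d) for d in (1, 5, 3, 9, 7)]
kept, to_delete = prune_snapshots(snapshots, keep=3)
```

Real policies are often tiered (e.g., keep dailies for a week, weeklies for a month), but the separation of "what to keep" from "what to delete" shown here is the core of any automated retention job.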
We are looking for an excellent, experienced person in the DevOps field. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes to empower developers to deploy and release their code seamlessly.
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.
- Understanding of accessibility and security compliance (depending on the specific project).
- User authentication and authorization between multiple systems, servers, and environments.
- Integration of multiple data sources and databases into one system.
- Understanding of the fundamental design principles behind a scalable application.
- Configuration management tools (Ansible/Chef/Puppet); Cloud Service Providers (AWS/DigitalOcean); Docker + Kubernetes ecosystem is a plus.
- Should be able to make key decisions for our infrastructure, networking, and security.
- Manipulation of shell scripts during migrations and DB connections.
- Monitor production server health across different parameters (CPU load, physical memory, swap memory) and set up a monitoring tool (e.g., Nagios) to monitor production server health.
- Create alerts and configure monitoring of specified metrics to manage cloud infrastructure efficiently.
- Set up and manage VPCs and subnets; make connections between different zones; block suspicious IPs/subnets via ACLs.
- Create and manage AMIs/snapshots/volumes; upgrade/downgrade AWS resources (CPU, memory, EBS).
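The server-health monitoring described above boils down to sampling metrics and comparing them against thresholds. A minimal sketch of that check (the 70%/90% thresholds are illustrative assumptions, not Nagios or CloudWatch defaults):

```python
import os

def classify_load(load1: float, cores: int, warn: float = 0.7, crit: float = 0.9) -> str:
    """Classify the 1-minute load average relative to the number of CPU cores."""
    ratio = load1 / cores
    if ratio >= crit:
        return "CRITICAL"
    if ratio >= warn:
        return "WARNING"
    return "OK"

# Sample the real 1-minute load average (available on Linux/macOS).
load1, _, _ = os.getloadavg()
status = classify_load(load1, cores=os.cpu_count() or 1)
```

A monitoring agent would run a check like this on a schedule and route non-OK results to an alerting channel; tools such as Nagios or CloudWatch alarms generalize the same threshold logic across many metrics.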
The candidate would be responsible for managing microservices at scale and maintaining the compute and storage infrastructure for various product teams.
- Strong knowledge of configuration management tools like Ansible, Chef, and Puppet.
- Extensive work with change-tracking tools like JIRA; log analysis and maintaining documentation of production server error logs.
- Experienced in troubleshooting, backup, and recovery.
- Excellent knowledge of cloud service providers like AWS and DigitalOcean.
- Good knowledge of the Docker and Kubernetes ecosystem.
- Proficient understanding of code versioning tools, such as Git.
- Must have experience working in an automated environment.
- Good knowledge of Amazon Web Services such as Amazon EC2, Amazon S3 (Amazon Glacier), Amazon VPC, and Amazon CloudWatch.
- Scheduling jobs using crontab; creating swap memory.
- Proficient knowledge of Identity and Access Management (IAM).
- Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
- Candidate should have good knowledge of GCP.
B.Tech (IT)/M.Tech/MBA (IT)/BCA/MCA or any degree in a relevant field
EXPERIENCE: 2-6 years
- Red Hat specialist in OpenShift administration preferred
- Minimum 5 years of experience in managing applications deployed to Platform-as-a-Service (PaaS) platforms like Red Hat OpenShift
- Strong experience with configuration management using Ansible
- Strong experience working with distributed source control systems like Git, including branching and merging
- Good knowledge of supporting critical web applications with high-availability and highly scalable infrastructure
- Good experience with monitoring tools (preferably Nagios)
- Strong knowledge of orchestration platforms like Kubernetes is a must
- Considerable knowledge of cloud networking
- Cross-skilled across multiple cloud providers like AWS, Azure, and GCP (GCP preferred)
- Maintain and improve monitoring and metrics for the cloud infrastructure
- Ensure application health and performance with browser-based monitoring software like Dynatrace
- Automate and improve build systems using Jenkins, Apache Maven, and Artifactory, and deploy systems using tools like Ansible and OpenShift
- Collaborate with developers to migrate applications to the Red Hat OpenShift Platform-as-a-Service (PaaS) offering
- Should work on cloud technologies along with the latest DevOps technologies like OpenShift, Docker, and Kubernetes
- Collaborate in an agile team with Product Owners, Scrum Masters, System Architects, Development Teams, QA Engineers, other DevOps Engineers, and users
- Create CI/CD pipelines for various applications
- You will be deploying and configuring containers and automating releases
- 15+ years of hands-on technical application architecture experience and application build/modernization experience
- 15+ years of experience as a technical specialist in Customer-facing roles.
- Ability to travel to client locations as needed (25-50%)
- Extensive experience architecting, designing and programming applications in an AWS Cloud environment
- Experience with designing and building applications using AWS services such as EC2, AWS Elastic Beanstalk, AWS OpsWorks
- Experience architecting highly available systems that utilize load balancing, horizontal scalability and high availability
- Hands-on programming skills in any of the following: Python, Java, Node.js, Ruby, .NET or Scala
- Agile software development expert
- Experience with continuous integration tools (e.g. Jenkins)
- Hands-on familiarity with CloudFormation
- Experience with configuration management platforms (e.g. Chef, Puppet, Salt, or Ansible)
- Strong scripting skills (e.g. Powershell, Python, Bash, Ruby, Perl, etc.)
- Strong practical application development experience on Linux and Windows-based systems
- Extracurricular software development passion (e.g., active open-source contributor)
GaragePlug is an all-in-one cloud platform that redefines a customer’s journey with automotive service businesses. GaragePlug harnesses the power of digitalization to help automotive service businesses achieve immense operational efficiency and build a highly impressive customer experience that is sure to win any customer. GaragePlug aims to bring technological disruption to the automotive after-sales service & repair industry by taking the industry one step closer to the future. Currently, GaragePlug is trusted by hundreds of brands across 15+ countries and continues to expand across the world!
At least 10 years of experience.
Consultant Role Description:
- Tech Lead with strong expertise in Core Java and Spring Boot.
- Hands-on in developing microservices with the Eureka service registry and Docker; uses OAuth for API authentication; Kubernetes and Grafana.
- Knowledge of cloud technologies, AWS, CI/CD, Jenkins, and testing methodologies is preferred.
- Hands-on with API documentation and with invoking third-party REST/SOAP APIs through RestTemplate.
- Ability to design and architect solutions.
- Direct the integration of technical and engineering activities within projects.
- Knowledge of DevOps practices and tools.
- Ability to formulate and deliver solutions to complex problems in a large and diverse technology landscape with multiple teams.
- Recruit, coach, and mentor the best engineering talent
Preferable Location(s): Bengaluru, India Work Type: Part Time
Artifex HR is looking to hire an HR recruiter to manage our recruitment cycle. The job involves identifying potential hires, evaluating and interviewing candidates, and conducting post-recruitment checks.
- Sourcing candidate CVs from various job portals by posting ads and following up
- Placing job advertisements
- Using company’s database/ reference/ networks & teams
- Pre-screening activities before scheduling of interviews
- Coordinating with potential candidates during subsequent rounds
- Making reference checks for new hires before they are placed with the company
- Finalizing salaries and sending out offer letters to selected candidates
- Ensuring that the candidates join and are given a date of joining
- 6 months to 2 years of work experience as an IT Recruiter or similar role
- Experience with IT recruitment is preferred
- Degree in Human Resources Management, Organizational Psychology or relevant field
- Experience with sourcing techniques and familiarity with handling job portals
- Excellent verbal and written communication skills
EXPERIENCE: 6 months to 2 years
SALARY: Up to 20,000 per month
● Research, propose, and evaluate, with a 5-year vision, the architecture, design, technologies, processes, and profiles related to Telco Cloud.
● Participate in the creation of a realistic technical-strategic roadmap of the network to transform it to Telco Cloud and be prepared for 5G.
● Using your deep technical expertise, you will provide detailed feedback to Product Management and Engineering, as well as contribute directly to the platform code base to enhance both the customer experience of the service and the SRE quality of life.
● The individual must be aware of trends in network infrastructure as well as within the network engineering and OSS community. What technologies are being developed or launched?
● The individual should stay current with infrastructure trends in the telco network cloud domain.
● Be responsible for the engineering of Lab and Production Telco Cloud environments, including patches, upgrades, and reliability and performance improvements.
Required Minimum Qualifications (Education and Technical Skills/Knowledge):
● Software Engineering degree, MS in Computer Science, or equivalent experience
● Years of experience in an SRE, DevOps, development, and/or support-related role
● 0-5 years of professional experience for a junior position
● At least 8 years of professional experience for a senior position
● Unix server administration and tuning: Linux / RedHat / CentOS / Ubuntu
● Deep knowledge of networking layers 1-4
● Cloud / virtualization (at least two): Helm, Docker, Kubernetes, AWS, Azure, Google Cloud, OpenStack, OpenShift, VMware vSphere / Tanzu
● In-depth knowledge of cloud storage solutions on top of AWS, GCP, Azure, and/or on-prem private cloud, such as Ceph, CephFS, GlusterFS
● DevOps: Jenkins, Git, Azure DevOps, Ansible, Terraform
● Backend knowledge: Bash, Python, Go (knowledge of other scripting languages is a plus)
● PaaS-level solutions such as Keycloak for IAM, Prometheus, Grafana, ELK, and DBaaS (such as MySQL)
About the Organisation:
The team at Coredge.io is a combination of experienced and young professionals alike having
many years of experience in working with Edge computing, Telecom application development
and Kubernetes. The company has continuously collaborated with the open source community,
universities and major industry players in furthering its goal of providing the industry with an
indispensable tool to offer improved services to its customers. Coredge.io has a global market
presence with offices in the US and New Delhi, India.
Location – Pune
Experience – 1.5 to 3 years
Payroll: Direct with Client
Salary Range: 3 to 5 Lacs (depending on existing salary)
Role and Responsibility
• Good understanding of and experience with AWS CloudWatch for EC2, other AWS services and resources, and other sources.
• Collect and store logs
• Monitor and store logs
• Analyze logs
• Configure alarms
• Configure dashboards
• Preparation and following of SOPs and documentation.
• Good understanding of AWS in a DevOps context.
• Experience with AWS services (EC2, ECS, CloudWatch, VPC, Networking)
• Experience with a variety of infrastructure, application, and log monitoring tools such as Prometheus and Grafana
• Familiarity with Docker, Linux, and Linux security
• Knowledge and experience with container-based architectures like Docker
• Experience performing troubleshooting on AWS services.
• Experience in configuring services in AWS like EC2, S3, ECS
• Experience with Linux system administration and engineering skills on Cloud infrastructure
• Knowledge of Load Balancers, Firewalls, and network switching components
• Knowledge of Internet-based technologies - TCP/IP, DNS, HTTP, SMTP & Networking concepts
• Knowledge of security best practices
• Comfortable supporting production environments 24x7
• Strong communication skills