Profile: Sr. DevOps Engineer
Experience: 5-8 Years

The DevOps team at Druva is chartered with developing infrastructure code that is foundational to the deployment and operation of Druva's SaaS service. The DevOps team additionally enables Druva engineers to innovate rapidly by building tools that provide a simple, fast and robust developer experience by simulating a cloud in a box. Our focus centers on creating tooling that streamlines development, testing, building, integration, packaging, and deployment of mutable and immutable artifacts.

DevOps engineers are involved in the full life cycle of the application. You will be responsible for the design and implementation of the application's build, release, deployment and configuration activities, and will contribute to defining the deployment architecture of Druva's SaaS service. You will automate and streamline our operations and the processes involved in those activities. You will leverage existing tools and technologies, preferably open source ones, to build the infrastructure applications needed to support deployment, operation and monitoring of Druva's SaaS service. At the same time, you won't limit yourself from building such tools whenever off-the-shelf tools aren't adequate. You will continuously focus on improving the deployment design, and will troubleshoot and resolve issues in our dev, test and production environments.

Qualifications
- 5-8 years of experience designing and developing large-scale infrastructure applications that help deploy and smoothly operate a SaaS service.
- Experience with a wide variety of open source tools and technologies relevant to cloud deployment, including containers and deployment frameworks such as Docker Swarm, Kubernetes or equivalent, is a must.
- Experience with configuration management using Salt, Puppet, Chef or equivalent.
- Experience working with AWS is an added advantage.
- Strong expertise in Bash scripting, Python or equivalent.
- Strong grasp of automation tools and the ability to develop them as needed.
- Experience with continuous integration and continuous deployment (CI/CD) and the associated automation.
We are looking for a Python Developer to join our small team. We are based in the USA and the job will be remote in India. You can work from home or wherever you wish.

We are looking for someone with the following qualities:
• Analytical skills and creative reasoning
• Appreciates the value of documentation and good coding style
• Has gumption and is self-motivated to do well
• Not afraid to speak up (politely) when they think a better approach exists
• Willing to grind out a result even when it's not the most exciting task

Required technical skills:
• Python, SQL, regex, REST APIs
• Working knowledge of Linux

Nice-to-have technical skills/knowledge:
• C, C++
• Flask
• PHP
• AWS
• Linux kernel development
• MongoDB
• Pen testing
• Networking theory, incl. routing, switching, IPsec, SDN, BGP
Who are we?
BlueOptima is the only company providing reliable, objective software development productivity metrics. The technology has been implemented by some of the world's largest organizations, including insurance companies, telecoms and seven of the world's top ten universal banks. BlueOptima is a private limited company, incorporated in 2007 and based in London. BlueOptima is an Equal Opportunities employer.

Whom are we looking for?
We are hiring a System Administrator to join our growing company and be a part of our success story. We are looking for a talented System Administrator with deep experience in AWS and in managing large numbers of Linux servers used for hosting, to join our research and development team in India.

What does the role involve?
Manage the AWS services we use, covering the following (but not limited to): EC2 instances and network configuration, IAM policies, and the security perimeter on EC2, S3 and VPC. Design, install, configure, and maintain PostgreSQL database clusters. Respond to and resolve database access, performance, and other issues. Collaborate with engineering teams to optimize database usage. Ability to work non-peak hours when needed, for deployments and other configuration changes which might affect service availability. Use tools like Puppet and Terraform for application deployment and management. Maintain monitoring tools, e.g. Cacti. Configure and maintain an ELK server for log aggregation and alerting. Maintain the VPN and NIDS services running in the VPC. The role is based in India.

Why work for us?
Compensation higher than market salary. Stimulating challenges that fully use your skills, e.g. real-world technical problems whose solutions cannot simply be found on the internet. Working alongside other passionate, talented engineers. Hardware of your choice (e.g. MacBook Pro or Dell XPS). Our fast-growing company offers the potential for rapid career progression.

Required Skills
B.Sc. in Computer Science, Information Systems or a related technical degree, or an equivalent combination of education and experience. Proficiency in scripting languages like Python, shell, etc. Experience with ELK or Splunk, with a strong understanding of both structured and unstructured data. Experience with automation tools like Puppet and Terraform. Excellent oral and written communication skills. Good understanding of networking and of packet capture and analysis. Ability to troubleshoot problems remotely, including VPN/network connectivity issues from different OS environments.
Permanent position with a product client.

Essential Skills:
- 3+ years' experience of Windows Server management
- 3+ years' experience in Microsoft Azure administration, deployment, development and operations
- Networking (Azure networking, on-premise)
- Firewalls & VPN
- Experience in Linux administration
- Continuous integration, on VSTS in particular
- Security administration, e.g. setup of appropriate authorisation groups, roles and permissions structures
- Security (SSL, PKI, SSO, SAML)
- Experience of Azure ARM-based provisioning using Windows PowerShell scripting and templates
- Experience of Azure IaaS and PaaS offerings
- Experience with automation/configuration management using Puppet, Chef or runbooks
- Ability to use a wide variety of open source technologies and cloud services (experience with Azure is required)
- Application deployment tools (CI/CD) and their strategies
- Experience building or managing applications from the application layer down
- Exposure to security concepts and best practices
- Familiarity with one or more version control systems, mainly Git and SourceTree

Advantageous:
- Experience of NoSQL technology (e.g. Couchbase)
- Desired State Configuration and deployment (Puppet)
- Experience with a container orchestration framework such as Docker would be a definite plus
- Experience of Azure solution deployment and development
- Interest in, or experience of, mobile solution development (i.e. having worked as part of a team to deliver a mobile application)
- Azure Service Fabric
- Visual Studio Team Services for build and deployment
Times Internet Limited (TIL) is the internet and mobile venture of India's largest media house, the Times Group. TIL's websites are amongst the fastest growing web-based networks worldwide. TIL has led the internet revolution in India and has emerged as India's foremost web entity, running diverse portals and niche websites. Times Internet is the largest Indian digital network, with over 37 million unique visitors per month. Founded in 1999, it owns and operates over 30 properties in multiple domains such as news, entertainment, commerce, local and mobile, encompassing telecom, e-commerce, online advertisement solutions, communities, events, etc. Indiatimes.com is India's most preferred online and mobile value-added services destination for millions of users looking for rich and diverse digital content. It commands more than 1 billion page views per month.

Position: DevOps Engineer
Location: Noida

Position Summary:
o Provide technical excellence in building and maintaining an industry-standard Continuous Integration and Delivery process.
o Take ownership of the existing CI/CD systems and work on enhancement and optimization to meet business needs.
o Design, develop and maintain proper documentation to enable easier adoption of best practices and implementation of designs.
o Communicate effectively with developers, peers and other stakeholders in the organization.
o Utilize analytical and debugging skills to determine the root cause of problems, and demonstrate the ability to multi-task and re-prioritize responsibilities based on changing requirements (such as urgent customer problems).
o Expected to work in a SAFe Agile development process.
o Automation of configuration management and deployments, e.g. Chef, Puppet, Jenkins, etc.
o Experience in designing and developing DevOps automation tools and practices.
o Hands-on experience with SCM tools (e.g. SVN, TFS, Git) for managing, labeling, branching and merging source code.
o Excellent written and verbal communication skills.
o Must be good at global stakeholder management to drive integration activities.

Expectations (skill set):
o System administration and Unix internals
o OS-level networking
o Programming skills in any one of Python, Bash, Perl or Ruby
o Continuous integration and continuous deployment (Jenkins, etc.)
o HTTP nitty-gritty, plus any one web server (Apache, Nginx, etc.)
o Application servers, any one of Tomcat, JBoss, GlassFish, WebSphere, WebLogic, etc.
Looking for a skilled DevOps Engineer with an understanding of the SDLC and release cycles. Technically competent, with knowledge of Jenkins, Puppet or Chef, and Git or SVN.
- Design, implement and maintain scalable, flexible platforms and infrastructure
- System administration: OS and application installation and maintenance
- Capacity planning: work with software engineers and architects to design and build fault-tolerant systems in public and private clouds
- Work with test teams to design and perform automated integration and performance testing
- Develop continuous integration tools and processes that streamline testing and deployment of a large-scale data processing grid on public and private clouds
- Develop automation scripts, daemons, and APIs to reduce operational overhead