Managing cloud deployments & containers using Docker
Managing Linux - upgrades, application deployments, backup/restores, system tuning & performance
Managing & monitoring web servers, queuing systems, big data stream processing, and databases like Cassandra for security & performance
Unix shell scripting
Responsibilities:
▪ Design, build, manage and operate infrastructure and configuration of all platform environments with a focus on automation and infrastructure as code.
▪ Design architectures for distributed applications.
▪ Design, build, manage and operate the infrastructure-as-a-service layer (hosted and cloud-based platforms) that supports the different platform services.
▪ Design, build, manage and operate the continuous delivery framework and tools, and act as a subject matter expert on CI/CD for developer teams.
▪ Write and build continuous delivery pipelines to manage and automate the life cycle of the different platform components.
▪ Develop a log analytics solution to provide logging-as-a-service to hosted applications, based on open-source solutions.
▪ Evaluate performance trends and expected changes in demand and capacity, and establish the appropriate scalability plans.
▪ Identify and troubleshoot availability and performance issues at multiple layers of the deployment: hardware, operating environment, network and application.
▪ Recommend and maintain technology-related policies and procedures.
▪ Identify and suggest opportunities to improve efficiency and functionality.
▪ Implement data security and protection.
Skills and Qualifications:
▪ 2+ years of relevant DevOps experience
▪ Track record of building complex CI/CD platforms to build, test, deploy and release software products
▪ Significant hands-on experience designing Docker and Kubernetes clusters
▪ Fluency in shell scripting and CI/CD automation using Jenkins, Travis CI, etc.
▪ Understanding of integrating and working with NoSQL databases like Elasticsearch, MongoDB, DynamoDB and Bigtable
▪ Aptitude for fixing recurring issues by automating repeatable operational tasks
▪ Detail-oriented personality who does not lose sight of the big picture
▪ Thrives in a fast-paced, evolving, growing and dynamic environment
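The "write and build continuous delivery pipelines" responsibility above boils down, at its core, to running ordered stages with fail-fast semantics. A minimal sketch in Python; the stage names and lambda steps are illustrative stand-ins for real build/test/deploy commands, not any particular CI tool's API:

```python
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    """Run CI/CD stages in order; stop at the first failure (fail-fast)."""
    results = []
    for name, step in stages:
        ok = step()
        results.append(f"{name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            break  # skip later stages, as a CI server would
    return results

# Illustrative stages; real ones would shell out to build/test/deploy tools.
log = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),   # simulated test failure
    ("deploy", lambda: True),  # never reached
])
```

Real pipeline tools add retries, parallel stages and artifact passing on top of this skeleton, but the fail-fast ordering is the part that gets encoded first.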
Lead Cloud Architect
Do you cloud? Expertise driving infrastructure engineering & transformation initiatives from deep-tech startups to 42,000U (1,000 42U racks) loaded enterprises. Have you tandem-jumped at least 50 times, taking along a trusted business with on-prem workloads to safe-land onto a public cloud with a soft touchdown? Do you understand the innards of Apache web servers on Linux and SharePoint Server farms alike? Do you speak CloudFormation? Terraform? Do you speak JSON? Do you love automating everything possible, leveraging Python / PowerShell / Bash? Are you a voracious reader fascinated by the latest & greatest innovations on public cloud?
Introduction
We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment. We are a flat organization with unlimited growth opportunities and small team sizes, wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required. This is an entrepreneurial technology leadership position that challenges the client status quo by helping them re-imagine the art of what is possible, using a consulting mindset and business strategy skills. This position requires fanatic iterative-improvement ability: architect a solution, code, research, code more, research more and code some more, rebuild and re-architect, you get the drift. This position is for the hard-core-Linux-geek-turned-infra-engineer-turned-cloud-engineer-turned-passionate-cloud-architect. What we are NOT looking for: Buzzword Bozos (BB) or Certification Chasers (CC). What we seek is an AA (Awesome Attitude to Learn, Improve & Coach). Not just a BB or CC?
Quick self-discovery test: When was the last time you thought about how GANs (Generative Adversarial Networks) can help auto-create new building designs for an architect or new jewellery designs for a designer, OR passionately convinced a friend that the Google Video Intelligence API or AWS Rekognition can now auto-video-screen an applicant with a 90+% confidence level, automating 75+% of a recruiter's effort, OR leveraged the Google Vision API to do e-KYC? If this is what you constantly get blamed for & your friends have made an xkcd strip about you, you are probably the right fit for what we are looking for.
Work you will do...
This position will be responsible for consulting with clients and proposing architectural solutions to help move & improve infra from on-premise to cloud, or to help optimize cloud spend by moving from one public cloud to another. Be the first to experiment with new-age cloud offerings; help define best practices as a thought leader for cloud, automation & DevOps; be a solution visionary and technology expert across multiple channels. Continually augment your skills and learn new tech as the technology and client needs evolve. Use your experience in Google Cloud Platform, AWS or Microsoft Azure to build hybrid-cloud solutions for customers. Provide leadership to project teams, and facilitate the definition of project deliverables around core cloud-based technology and methods. Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results. Participate in technical reviews of requirements, designs, code and other artifacts. Identify and keep abreast of new technical concepts in Google Cloud Platform.
Education, Experience, etc.
Is education overrated? Yes, we believe so. However, there is no way to locate you otherwise. So unfortunately we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since the age of 12. And the latter is better.
We will find you faster if you specify the latter in some manner. Not just degrees; we are not too thrilled by tech certifications either ... :) To reiterate: passion for tech-awesome, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude and a strong ‘desire to deliver’ outlive those fancy degrees! 5-10 years of experience, with at least 2-3 years of hands-on experience in cloud computing (AWS/GCP/Azure) and IT operational experience in a global enterprise environment. Good analytical, communication, problem-solving, and learning skills. Knowledge of programming against cloud platforms such as Google Cloud Platform, and of lean development methodologies.
A quick self-discovery test below:
How do you treat yourself & others? You listen more than you speak. When you do speak, people feel the need to listen. You have ‘one’ life: no work life or personal life. You are the same at both places. You are generally happy and passionate about life. When shit does happen, you know how to tell your heart ‘All is well’. You are compassionate to yourself, you love your work, your company, your country, and are generally a person people like to be around.
How do you work & live? You make difficult & complex decisions in an environment filled with uncertainty and a lack of well-defined constraints. You are able to admit to your team that you were shit-scared while making those decisions. You are able to juggle conflicting priorities and remain composed as the client keeps changing requirements. :) You are genuinely passionate about developing great software, learning a lot, helping others learn and having loads of fun while doing so.
What do you love? You love things. You are passionate. You care for yourself, family, country and The Big Bang Theory (and this is a must!). You love to organize, index, and improve things around you. Yes, you are Sheldon’ish’ at times and ‘Leonard’ish’ at other times.
You are passionate about improving processes and you truly feel satisfied by making things better. You love Google. And AWS. And Terraform. And CI/CD pipelines. And Linux.
Who are we?
Searce is a cloud, automation & analytics-led business transformation company focused on helping futurify businesses. We help our clients become successful by helping them reimagine ‘what's next’ and then enabling them to realize that ‘now’. We processify, saasify, innovify & futurify businesses by leveraging Cloud | Analytics | Automation | BPM.
What do we believe?
Best practices are overrated: implementing best practices can only make one ‘average’.
Honesty and transparency: we believe in the naked truth. We do what we tell and tell what we do.
Client partnership: a client-vendor relationship? No. We partner with clients instead. And our sales team comprises 100% of our clients.
How do we work?
It's all about being happier first. And the rest follows. Searce work culture is defined by HAPPIER.
Humble: Happy people don't carry ego around. We listen to understand, not to respond.
Adaptable: We are comfortable with uncertainty. And we accept changes well, as that's what life's about.
Positive: We are super positive about work & life in general. We love to forget and forgive. We don't hold grudges. We don't have time or adequate space for it.
Passionate: We are passionate people. Passion is what drives us to work and makes us deliver the quality we deliver.
Innovative: Innovate or die. We love to challenge the status quo. We encourage curiosity & making mistakes. Purple elephants are awesome.
Excellence: No fun doing things just for the sake of doing them. It doesn't make the doer happy, nor does it make anybody else go ‘wow’. We strongly believe that mediocrity is boring. So aim for excellence or skip doing it altogether.
Responsible: Driven. Self-motivated. Self-governing. Nothing here is "not my job".
Link - https://docs.google.com/document/d/1LTcER8X_dQVw8tTRzdhMYvtH_0GUPXllvTtq1aUF_vY/edit
Haptik specializes in building chatbots and has been doing it since 2013, way before chatbots were cool. We are looking for an Integration Engineer to help build bots for various big and small clients.
- As an integration engineer, you will need in-depth knowledge of backend systems, design principles and good coding practices, and a thorough understanding of how to build complex bots using Haptik tools.
- This role is ideal for someone with good communication skills and an interest in using technologies to build great chatbots.
- If you are looking for a challenging opportunity at a company that is building the next phase of technology, apply now!
Responsibilities:
- Build chatbots using Haptik tools
- Work with clients to understand requirements and the implementation of APIs
- Collaborate with the product team to build chatbots with great user experience
- Integrate and develop various APIs and systems to build logic flows for chatbots
- Work with various software development tools
- Model database architecture for performance and scalability
- Develop appropriate web-based APIs for related systems
- Use and promote standard good coding practices
- Participate in code reviews, automated and functional testing, and other aspects of our quality assurance process
- Fully participate in a scrum-based, agile development team
- Maintain up-to-date knowledge of technology standards, industry trends, emerging technologies, and software development best practices
Requirements:
- 1-2 years of experience developing scalable products
- Good knowledge of Python and the Django framework
- Good communication skills
- Must have modelled database schemas for large-scale applications (MySQL, Mongo, etc.)
- Must have a Bachelor's or higher degree in engineering
- Must be a hustler
- Must be self-motivated
For more information about the company, you can visit our website www.haptik.ai.
Job Description: Are you fun-loving and passionate about technology? Are you ready to build a career that enhances your skills? Then we are the right place to start your career in the field of Information Technology. We are totally committed to our employees, our customers, our work culture and especially our technology. The DevOps team plays a vital role in enabling SmartDocs to provide customers with our world-class technology. They are the adhesive that binds the SmartDocs products together. We are an organization that encourages new ideas from employees through collaboration and creativity. We are seeking an inventive DevOps engineer to join the SmartDocs infrastructure team. The candidate will work with IT teams to provide solutions and be a subject matter expert in DevOps best practices; an ideal candidate should have a minimum of 2-4 years of hands-on experience in the Microsoft Azure cloud platform.
Desirable Skills & Expertise:
• Bachelor's degree in computer science, software engineering, information technology or an equivalent combination of relevant education. Preferably from a background of B.E. / M.E. (CSE / IT / Software Engineering) or MCA with an aggregate of minimum 65%.
• Min. 2 years of hands-on experience in the following tools & technologies:
Cloud platform: Microsoft Azure (mandatory)
Code repository: GitHub
Configuration management tools: Chef & Puppet
Deployment & other tools: Jenkins & Docker
Scripting languages: Shell & Python
Databases: MongoDB & Elasticsearch
• Strong knowledge of scripting and automated process management via shell scripting.
• Excited by the opportunity to build highly scalable and world-class products.
• Eager and willing to work in a fast-paced environment.
• Possesses intellectual humility: smart, driven, creative and able to learn from slip-ups, willing to raise others up.
• Someone who has a feel for business concerns and can drive them with technical goals.
Key Responsibilities: As a part of the DevOps team you will be responsible for configuration, optimization, documentation and support of the infrastructure components of SmartDocs software products.
Diagnose and develop root-cause solutions for failures and performance issues in our production environment.
Evaluate new tools, technologies, and processes to improve the speed, efficiency, and scalability of SmartDocs continuous integration environments.
Monitor and support deployments on cloud-based and on-premises infrastructure.
Handle production infrastructure & provide support for systems.
Assist in planning and reviewing application architecture and design to promote an efficient deployment process.
Troubleshoot server performance issues & handle the continuous integration system.
Automate processes using shell scripting & Python.
Install, configure & maintain systems and applications.
Collaborate with the Technology Services team to manage server space, configuration and security.
Perform all other assigned duties related to servers, systems & networks.
Skills and Specifications:
Min. 2-4 years of hands-on experience in Microsoft Azure and other supporting tools.
Strong aptitude for rethinking complex processes to eliminate manual processing.
Must be able to handle multiple tasks and adapt to a constantly changing environment.
Must have a good understanding of the SDLC.
Knowledge of Linux, Windows Server, monitoring tools and shell scripting.
Knowledge of implementing Java applications is an added advantage.
Able to support clients in a fast-paced environment as and when required.
Ability to use a wide variety of open-source technologies and cloud services.
Knowledge of best practices and IT operations for an always-up, always-available service.
Self-motivated, demonstrating the ability to pick up new technologies with minimal supervision.
Must possess strong communication and presentation skills and be able to communicate professionally in English over the phone, in person & in writing. Comfortable in the dynamic atmosphere of a technical organization with a rapidly expanding customer base across European & Asian countries. Organized, flexible and analytical, able to solve problems creatively. SmartDocs provides equal opportunities to all its employees and all qualified applicants for employment without regard to their race, caste, religion, color, ancestry, marital status, gender, sexual orientation, age, nationality, ethnic origin or disability. Our resource policies shall promote diversity and equality in the workplace while encouraging the adoption of international best practices.
Plivo is among the leading service providers in the CPaaS market, which is estimated to grow to a whopping 8 billion dollars by 2019. Plivo started in 2011 and has been backed by investors such as Andreessen Horowitz, who were also early-stage investors in companies such as Facebook, Google and Airbnb. Plivo is also part of Y Combinator, one of the most sought-after incubators in the Valley, and is now profitable as well. Plivo has a team of about 200 members spread between its US & India offices. Thousands of businesses from around the globe trust Plivo with their voice and messaging needs, including helping them manage their customer interactions. We are looking for someone who is excited to grow with us and be part of a company that is disrupting a multibillion-dollar telecom space. We are dedicated to simplifying and disrupting the multi-billion-dollar telecom market. Our cloud-powered Voice and SMS APIs allow businesses to build communication applications that are scalable, low cost, and global. Thousands of well-known businesses are already built using Plivo, including popular conferencing solutions, mobile communication apps, SMS marketing software, and business phone solutions, and this is just the beginning. We are looking for a talented and driven product engineer to join our team.
The stack we use: Golang, Django, Python, Flask, Redis, Postgres, Celery, Nginx, Kamailio, FreeSWITCH, SIP, React, WebRTC, Linux, Android, iOS.
The technologies we work on: networking, distributed systems, big data, least-cost routing, billing, invoicing, analytics, fraud detection & prevention, VoIP protocols, SMS protocols, cloud infra, web and mobile platforms, microservices.
Roles:
- Own and implement features used by large customers like Truecaller, Mozilla, Zomato, Netflix, etc.
- Performance, security and usability goals are in your DNA.
- Full ownership and accountability of microservices, including day-to-day operations and maintenance.
- Business and technical metric definitions and reviews
- Drive CI & CD
Responsibilities:
- Evaluate technologies and development stacks for an API-based platform which scales to 100,000 transactions per second.
- Perform push-button deployments of any version of the software to any environment on demand.
- Own the end-to-end life cycle of the product, from requirement analysis, design, development and test to release and maintenance.
- Develop reusable tools/libraries.
- Identify opportunities for automation and collaboration points.
- Continuously improve cycle time, throughput, and code quality.
- Continuously improve the ratio of value-adding to non-value-adding activities.
Skills and Requirements:
- Must have worked on one or more of: Golang, Django, Flask, Redis, memcached, Postgres, Celery, DynamoDB, Nginx, Linux, Git, AWS, Docker
- Proficient in at least one OO language: Python/Golang (preferred)
- Writing high-performance, reliable and maintainable code
- Excellent understanding of microservice patterns
- Ability to define cross-core contracts and bring them to closure through collaboration
- Good knowledge of database structures, theories, principles, and practices
- Experience working with AWS components (EC2, S3, SNS, SQS)
- Analytical and problem-solving skills
- Good aptitude for multi-threading and concurrency concepts
- Working knowledge of Git and proficiency with at least one build server: Jenkins / Travis / Bamboo
- Experience in the telecom domain is a plus
- Experience with AWS & APIs is a plus
Job perks:
- Informal work style, startup culture with flexible work hours
- Endless snacks and beverages
- Free gym membership
- Competitive salary and medical benefits
TechVerito is an IT services organisation started in 2016. We offer services and solutions clustered around our key competencies: Continuous Delivery & DevOps, Agile Consulting & Training, Big Data, Enterprise Web Apps, Enterprise Mobile and Agile Testing Services. TechVerito has the experience to deliver carefully crafted software. The team has an excellent track record of delivering full-stack solutions in a distributed environment using technologies like Ruby, Ruby on Rails, Java, Golang and NodeJS. At TechVerito, you will get to work in a dynamic, collaborative, non-hierarchical environment where your talent is valued, and build software using the latest technologies and tools.
We, Dotball, are developing a comprehensive fan engagement platform for cricket. We launched our fantasy gaming platform at dotball.com. Cricket is the common language spoken by its 1 billion fans, and we're unifying all of them. We're on a hiring spree for designers, developers and marketers. We're working out of a kick-ass, hi-tech apartment in central Bangalore, off Cunningham Road. All our designers and developers are provided with a MacBook Pro (MPXQ2HN/A). If this isn't enough to catch your eye, we provide accommodation for a month*, so that you can peacefully do your house hunting. And of course, lunch is on us. If working in style is your cup of tea, come join us.
Skills required:
Knowledge of Node.js and frameworks such as Express, StrongLoop, etc.
Understanding of the nature of asynchronous programming and its quirks and workarounds
Good understanding of server-side templating languages like Jade, EJS, etc.
Experience with micro-services or highly scalable infrastructure
Experience working with Docker, Redis, MySQL, MongoDB
Responsibilities:
Integration of user-facing elements developed by front-end developers with server-side logic
Building reusable code and libraries for future use
Optimization of the application for maximum speed and scalability
Implementation of security and data protection
Design and implementation of data storage solutions
Test coverage for the written code
Troubleshooting and debugging applications
Job Perks:
A MacBook Pro (MPXQ2HN/A) will be provided for work purposes.
Flexible work-hour policy
Lunch is provided on all working days
Short-term accommodation is offered for candidates relocating to Bangalore.
AppDynamics certified, Docker containers, deployment testing, UNIX and Linux, database testing, Java, Selenium, Jenkins, TestNG
We are looking for a smart, self-driven developer with the ability to solve hard problems. Our current stack is Spring Boot, React and Postgres, with deployments on AWS, but we're planning to experiment with different stacks for different services in the future. We expect you to act like a tech lead and build a team around you. You will also be instrumental in deciding the future direction of the product. We need experts who can help us build modular, scalable applications. At Interleap, we're building tech courses for corporates on refactoring, clean coding, Android and DevOps, with self-evaluating assignments and other interventions that make courses interactive and engaging.
About the Company: Tracxn is a Bangalore-based product company providing a research and deal-sourcing platform for venture capital, private equity, corp devs & professionals working around the startup ecosystem. We are a team of 750+ working professionals serving customers across the globe. Our clients include funds like Andreessen Horowitz, Sequoia Capital, Accel Partners and NEA, and large corporates such as ING, Societe Generale, LG and Royal Bank of Canada. We are backed by prominent investors like Ratan Tata, Nandan Nilekani, and SAIF Partners.
Founders:
- Neha Singh (ex-Sequoia, BCG | MBA - Stanford GSB)
- Abhishek Goyal (ex-Accel Partners, Amazon | BTech - IIT Kanpur)
Roles and Responsibilities:
- Design, develop and deliver products and frameworks (like queuing systems, schedulers, etc.) that will be used across all the engineering teams in Tracxn.
- Evaluate, benchmark and roll out platform components like API gateways, load balancers, etc.
- Drive centralized solutions like service discovery and rate limiting for teams across Tracxn.
- Extend or develop frameworks on top of Docker to solve Tracxn's needs for scaling.
- Work with application development teams to refactor apps or build new modules to help onboard new architectures.
Skills and Experience:
- Must have experience in building fault-tolerant and scalable infrastructure.
- Must have good conceptual, architectural & design skills.
- Must have experience in at least one programming language, such as Java, C#, Python, or shell/Bash scripting.
- Must have experience with at least one RDBMS or NoSQL database.
- Must have a deep understanding of how software works at the systems level, with familiarity with low-level aspects of performance, multi-threading, performance analysis, and optimization.
- Good to have: experience working with cloud platforms like AWS, Google Cloud, etc.
- Good to have: experience with Docker, Kubernetes, Ansible, Chef, Puppet.
- Good to have: experience with tools such as Kafka, HAProxy, Nginx.
- Team management experience will be an added bonus.
What we have to offer:
- Work with a performance-oriented team driven by ownership.
- Learn to design systems for high accuracy, efficiency, and scalability.
- Focus on delivering quality work rather than deadlines.
- Meritocracy-driven, candid culture.
- Very high visibility into the startup ecosystem.
Above all, you love to build and ship products that customers will use every day.
We're looking for a Senior Backend Engineer (2+ years of experience) for our company, Spotmentor Technologies. Right now our technology team has 4 members; this role is for an early team member and carries significant ESOPs with it. We need someone who can lead the data science function with both vision and hands-on work and is excited to use this area to develop B2B products for enterprise productivity.
Responsibilities:
• Collaborate with cross-functional team members to develop software libraries, tools, and methodologies as critical components of our computation platforms.
• Use independent judgment to take existing code, understand its function, and change/enhance it as needed.
• Work as a team leader rather than just a member.
Requirements:
• Proficient in Python with sound knowledge of Django/Flask.
• Experience in building modular and efficient applications which can run at scale.
• Proficient in writing database queries (NoSQL preferably).
• Basic knowledge of working with containers (Docker).
• Follow best practices while writing code: PEP 8, TDD, SOA, etc.
• Full understanding of VCS (mainly Git).
• Strong problem-solving skills and analytical thinking.
Job Description:
Develop and deliver automation software required for building & improving the functionality, reliability, availability, and manageability of applications and cloud platforms.
Champion and drive the adoption of Infrastructure as Code (IaC) practices and mindset.
Design, architect, and build self-service, self-healing, synthetic monitoring and alerting platforms and tools.
Automate the development and test automation processes through a CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers).
Build a container hosting platform using Kubernetes.
Introduce new cloud technologies, tools & processes to keep innovating in the commerce area to drive greater business value.
Must-haves:
Proficiency in deploying and maintaining cloud-based infrastructure services (AWS, GCP, Azure; good hands-on experience in at least one of them).
Well versed in service-oriented architecture, cloud-based web services architecture, design patterns and frameworks.
Good knowledge of cloud services for compute, storage, networking, messaging (e.g. SNS, SQS) and automation (e.g. CFT/Terraform).
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
Experience with systems management/automation tools (Puppet/Chef/Ansible, Terraform).
Strong Linux system admin experience with excellent troubleshooting and problem-solving skills.
Hands-on experience with languages (Bash/Python/Core Java/Scala).
Experience with CI/CD pipelines (Jenkins, Git, Maven, etc.).
Experience integrating solutions in a multi-region environment.
Self-motivated; learns quickly and delivers results with minimal supervision.
Experience with Agile/Scrum/DevOps software development methodologies.
Nice to have:
Experience in setting up the Elasticsearch-Logstash-Kibana (ELK) stack.
Having worked with large-scale data.
Experience with monitoring tools such as Splunk, Nagios, Grafana, Datadog, etc.
Prior experience working with distributed architectures like Hadoop, MapReduce, etc.
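The "self-healing" and synthetic-monitoring duties above usually rest on a humble primitive: retry with exponential backoff, so a flaky probe or transient outage does not immediately page anyone. A hedged sketch (the function name and delay policy are illustrative; injecting `sleep` keeps it testable):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def retry(op: Callable[[], T], attempts: int = 4, base_delay: float = 0.1,
          sleep: Callable[[float], None] = time.sleep) -> T:
    """Call `op` until it succeeds, doubling the delay after each failure.
    Re-raises the last exception once `attempts` is exhausted."""
    delay = base_delay
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise
            sleep(delay)
            delay *= 2
    raise RuntimeError("unreachable")
```

A synthetic monitor would wrap its HTTP or TCP probe in `retry(...)` and alert only when all attempts fail; adding random jitter to the delay is a common refinement to avoid thundering-herd retries.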
Job Description:
* Be part of a cloud product team responsible for defining and developing cloud operations automation, orchestration and optimization use cases.
* Be at the forefront of cloud technology, assisting a global list of customers that consume multiple cloud environments.
* Explore and implement a broad spectrum of open source technologies.
* Help the team/customer resolve technical issues.
* Work closely with development teams on CI/CD pipelines and with the QA team on test automation.
* Be extremely customer focused, and flexible to be available on-call for solving critical problems.
* Contribute to process improvement involving product deployments, cloud governance & customer success.
Desired Candidate Profile:
* Minimum 2+ years of experience with a B.E/B.Tech
* Well versed in DevOps technologies: automation, infrastructure orchestration, configuration management and continuous integration
* Prior work experience in the cloud domain and with cloud-based products is a must (AWS, Azure)
* Experience in Linux administration, server hardening and security compliance
* Web and application server technologies (e.g. Apache, Nginx, IIS)
* DevOps, orchestration/configuration management and continuous integration technologies (e.g. Chef, Puppet, Docker, Jenkins, Ansible, etc.)
* Good command of at least one scripting language (e.g. Bash, PowerShell, Ruby, Python)
* Networking protocols such as HTTP, DNS and TCP/IP
* Experience in managing version control platforms (e.g. Git, SVN, TFVC)
Site Reliability Engineer
Job Description: You will be administering the infrastructure of an indigenous, one-of-a-kind artificial intelligence cloud platform. You will be working with the dev teams to deploy, monitor and scale the distributed platform to handle real-time AI analysis and loads and loads of visual data (images and videos in various formats). We're looking for people with extensive DevOps experience and strong systems programming skills.
Responsibilities:
1. Be responsible for the uptime and reliability of SigTuple's infrastructure, and help backend teams achieve it by writing reliable software and automation.
2. Work with other development teams to automate deployment of modules and manage the build and release pipeline.
3. Extensive process-level and node-level monitoring and auto-healing of the entire cluster.
4. Managing, provisioning and servicing cloud servers.
5. Contribute to back-end services and their infrastructure and system design.
Requirements:
1. BTech/MTech in any engineering discipline.
2. 3-6 years of experience in a DevOps/software engineering role.
3. Experience in the management of cloud computing services; extensive knowledge of at least one cloud or container platform (Kubernetes, AWS, GCP, Azure, etc.).
4. Proficiency with any major monitoring framework (Sensu, Nagios, etc.).
5. Comfortable with at least one scripting language (Python, Perl) and a configuration management or orchestration tool (Ansible, Chef, etc.).
6. Proficiency with OS and network fundamentals and strong Linux administration skills.
7. Experience with container tools (Docker ecosystem) will be a plus.
8. Experience working with issues of scale in a system.
9. Experience working in a startup is a plus.
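Process-level monitoring with auto-healing, as described above, typically requires debouncing: a service should be restarted only after several consecutive failed checks, not on a single transient blip. A minimal, hypothetical sketch of that state machine (class and method names are illustrative, not SigTuple's tooling):

```python
from collections import defaultdict
from typing import Dict

class AutoHealer:
    """Signal a restart only after `threshold` consecutive failed health
    checks, to avoid flapping on transient errors."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures: Dict[str, int] = defaultdict(int)

    def observe(self, service: str, healthy: bool) -> bool:
        """Record one health-check result; return True if a restart is due."""
        if healthy:
            self.failures[service] = 0  # any success resets the streak
            return False
        self.failures[service] += 1
        if self.failures[service] >= self.threshold:
            self.failures[service] = 0  # reset after signaling a restart
            return True
        return False
```

A monitoring loop would call `observe()` once per check interval per service and trigger the actual restart (e.g. via an orchestrator API) when it returns True.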
Hi, we are Civil Maps. Civil Maps, provider of 3D mapping technology for fully autonomous vehicles, has raised a $6.6 million seed funding round from Ford Motor Company, Motus Ventures, Wicklow Capital, StartX Stanford and Yahoo cofounder Jerry Yang's AME Cloud Ventures. Civil Maps' mission is to make it possible for fully autonomous vehicles (SAE Levels 4-5) to drive anywhere smoothly and safely. Through artificial intelligence and vehicle-based local processing, Civil Maps converts sensor data into meaningful map information built specifically to direct fully autonomous vehicles. The company will use the seed investment to accelerate product development and deployment with a number of leading automotive companies and technology partners. Civil Maps' artificial intelligence software aggregates raw 3D data from LiDAR (high-resolution laser imaging), camera and other sensors onboard autonomous vehicles and organizes the information into machine-readable maps. The information is vastly more actionable than that of today's mapping systems and requires a fraction of the data storage and transmission of existing technologies. Thanks to this light data footprint, Civil Maps' spatial information is far less costly to transmit over cellular networks, enabling the company to more easily crowdsource, update and share road data in real time, a major improvement over the lengthy, human-annotated processes in current use. As a result, the company can quickly generate and maintain maps that give fully autonomous vehicles the comprehensive and precise knowledge to operate autonomously, safely and smoothly on all roads. About the Opportunity: This is an opportunity to work on some of the most exciting problems in the autonomous vehicle industry. Civil Maps is looking for a DevOps engineer to grow its overall infrastructure.
You will be interacting with a completely custom-made distributed computing framework, handling terabytes to petabytes of machine vision data and millions of requests from robotic platforms. In our interdisciplinary teams, you will get acquainted with the many facets of 3D map creation: data collection, geospatial registration, feature extraction and auditing. Come work on novel challenges, and learn and grow while helping Civil Maps achieve on-demand perception of the world around us.
We are hiring professionals for the below skills:
Work Location: Hyderabad
Job Description for DevOps Engineer:
Experience: 3-15 yrs
Responsibilities:
Deliver next-generation distributed applications using a variety of tools including Kubernetes, Ansible, RHEL, OpenShift, Atomic, Mesos, OpenStack and Docker.
Build solutions and provide a framework for microservices and DevOps.
Bring knowledge of, and help with, continuous integration, deployment and security.
Automate the build and deployment environment with the relevant tools.
Troubleshoot build and deployment issues.
Experience with the following:
Deploying and managing continuous integration tools (Jenkins, Travis, etc.).
Cloud-based deployments (AWS, Azure, OpenStack).
Linux administration, usage and scripting (Bash, Batch, PowerShell).
Deploying MEAN stack applications.
Security tools such as Fortify and Nessus.
Network architecture (routing, load balancing, firewalls, VLANs).
Virtualization (VMware, Xen, KVM, OpenStack).
Container technologies (Mesos, Docker, rkt, etc.).
Setting up backup and restore mechanisms for the DB and other relevant systems.
Required Skills:
Strong experience in AWS / Google Cloud.
Strong development experience in Perl, Python, Docker and Postgres.
Strong experience in build/release management.
Working experience on Linux and excellent knowledge of shell scripts.
Knowledge of virtualization platforms (VMware).
Working experience with configuration management tools.
Working experience with test and build systems (Jenkins/Maven).
Strong communication skills, a passion to learn, and an ability to work well with people at all levels of an organization.
Roles and Responsibilities:
Create deployment units consisting of build, document and installation artifacts.
Prepare delivery definition / release note / production turn-over documents.
Establish DevOps policies.
Communicate with developers, product managers and technical support specialists on product issues.
Assist in creating and maintaining the configuration and change management plan for the project.
Choose suitable DevOps tools.
Set up the configuration management environment.
Assist in routine back-up and archival of the project repository.
We are looking for a Linux Administrator responsible for designing, implementing and monitoring the server infrastructure, and for collaborating with remote teams on deployment strategies and processes.
Responsibilities:
Help tune performance and ensure high availability of the infrastructure.
Design and develop infrastructure monitoring and reporting tools.
Develop and maintain configuration management solutions.
Skills:
Experience with Linux servers and virtualisation.
Experience in installing, configuring and maintaining services such as Nginx, Apache and Percona.
Familiarity with load balancers, e.g. HAProxy.
Experience in MySQL, Percona and MongoDB administration.
Experience in performance optimization.
DB backups and backup maintenance.
Knowledge of deploying frameworks such as Node.js projects using Jenkins.
Experience in RabbitMQ setup, maintenance and administration.
Working experience with Docker setup.
Ability to build and monitor servers in production.
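The backup-maintenance skill listed above usually includes pruning old database dumps. Below is a minimal Python sketch of a retention policy that keeps only the N newest `*.sql.gz` dumps in a directory; the directory layout and file naming are purely illustrative assumptions, not a prescribed scheme.

```python
from pathlib import Path

def prune_backups(backup_dir, keep=7):
    """Keep the `keep` most recent *.sql.gz files in backup_dir; delete the rest.

    Sorting is by modification time; timestamp-named MySQL/Percona dumps
    could equally be sorted by filename. Returns the deleted file names.
    """
    backups = sorted(Path(backup_dir).glob("*.sql.gz"),
                     key=lambda p: p.stat().st_mtime, reverse=True)
    removed = []
    for old in backups[keep:]:
        old.unlink()
        removed.append(old.name)
    return removed
```

A cron entry would typically run the dump first and then call a pruner like this so the newest backup is never a deletion candidate.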
Our company is working on some really interesting projects in the Big Data domain across various fields (utility, retail, finance), with some big corporates and MNCs around the world. While working here as a Big Data Engineer, you will be dealing with big data in structured and unstructured form, as well as streaming data from Industrial IoT infrastructure. You will be working on cutting-edge technologies and exploring many others, while also contributing back to the open-source community. You will get to know and work on the end-to-end processing pipeline, which deals with all types of work: storing, processing, machine learning, visualization, etc.
Building mobile and SaaS products for skill training institutes focused on the bottom of the pyramid and connecting them to jobs. Looking for someone who is passionate and driven, and wants to make a difference solving hard problems.
Will be part of the product development team, supporting the team with the design and implementation of several features for distributed Docker deployments.
- 2+ years of experience in using and deploying Docker, LXC or rkt containers.
- Programming experience in C, C++ or Go. Engineers with other programming language experience and a willingness to learn Go are also welcome.
- Experience in the following areas is an added advantage: REST interface implementation, OAuth, Protobuf, RabbitMQ, etc.
Build, deploy and release automation engineer. Will be part of the product development team, supporting the team in formulating workflows for build, deploy and release. The product uses new-age technologies such as Go, AngularJS and Docker.
- 5 years of experience managing SCM and build environments.
- Experience with Git tooling and scripting.
- Programming/scripting experience in Python, Bash, etc.
- Branching, versioning management and process automation.
- Experience with Docker image management as part of product build and release.
- Deployment in the cloud and on virtual machines.
- Experience with service deployment on Kubernetes, Mesos, etc. is an added advantage.
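As a sketch of the Docker image management mentioned above, the snippet below computes the next semantic-version image tag from the last released tag and shows the shape of a build-and-push step. The repository name is hypothetical, and the `docker build`/`docker push` calls require a running Docker daemon, so they are shown for shape only.

```python
import subprocess

def next_image_tag(repo, last_tag, bump="patch"):
    """Compute the next MAJOR.MINOR.PATCH image tag from the last released tag.

    `repo` is the image repository (e.g. registry.example.com/app, a
    hypothetical name); `bump` is one of "patch", "minor", "major".
    """
    major, minor, patch = (int(x) for x in last_tag.split("."))
    if bump == "major":
        major, minor, patch = major + 1, 0, 0
    elif bump == "minor":
        minor, patch = minor + 1, 0
    else:
        patch += 1
    return f"{repo}:{major}.{minor}.{patch}"

def build_and_push(repo, last_tag, context="."):
    """Build, tag and push the next image version (needs a Docker daemon)."""
    tag = next_image_tag(repo, last_tag)
    subprocess.run(["docker", "build", "-t", tag, context], check=True)
    subprocess.run(["docker", "push", tag], check=True)
    return tag
```

Release tooling usually derives `last_tag` from the most recent Git tag so that image versions and source tags stay in lockstep.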