Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators but securing it is incredibly difficult. It’s the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter and fast learner with a desire to work in a startup environment
- Experience working with public clouds like AWS
- Operating and monitoring cloud infrastructure on AWS
- Primary focus on building, implementing and managing operational support
- Design, develop and troubleshoot automation scripts (configuration / infrastructure as code or others) for managing infrastructure
- Expert in one of the scripting languages: Python, shell, etc.
- Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc.
- Handling load monitoring, capacity planning, and services monitoring
- Proven experience with CI/CD pipelines and handling database-upgrade-related issues
- Good understanding of and experience working with containerized environments like Kubernetes and datastores like Cassandra, Elasticsearch, MongoDB, etc.
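The load-monitoring and capacity-planning work listed above often comes down to simple arithmetic. A minimal sketch in Python (the function name, throughput figures, and headroom value are all hypothetical, not part of any specific toolchain):

```python
import math

def instances_needed(peak_rps: float, per_instance_rps: float, headroom: float = 0.3) -> int:
    """Estimate how many instances cover a peak load while keeping spare headroom.

    headroom=0.3 means each instance is kept at most ~70% busy.
    """
    if per_instance_rps <= 0:
        raise ValueError("per_instance_rps must be positive")
    usable = per_instance_rps * (1 - headroom)  # effective capacity per instance
    return max(1, math.ceil(peak_rps / usable))

# 1200 req/s peak, 100 req/s per instance, 30% headroom
print(instances_needed(1200, 100))  # 18
```

In practice the same calculation feeds an autoscaling policy rather than a script, but the trade-off (headroom vs. cost) is identical.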
About Concentric AI
Concentric AI is a US-based Series-A-funded startup. It has engineering offices in Bangalore, San Jose, and Pune.
We are in the data security domain and use AI and machine learning to solve customer use cases. Our customers are large and mid-size enterprises across verticals.
Using Semantic Intelligence, we address data security without rules, regex, or end-user involvement. Our product gives visibility into the who, where and how of an enterprise customer's sensitive data. A customer can automatically remediate and minimize data risk with Concentric AI.
You can find details of the Concentric product on our website: www.concentric.ai
• Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: release orchestration & automation, source code & build management, code quality & security management, behavior-driven development, test-driven development, continuous integration, continuous delivery, continuous deployment, and operational monitoring & management; extra points if you can demonstrate your knowledge with working examples.
• Hands-on, demonstrable working experience with DevOps tools and platforms, viz., Slack, Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in virtualization & containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity & Compliance, (or) equivalent demonstrable cloud platform experience.
• Demonstrable working experience with API management, API gateways, service mesh, identity & access management, and data protection & encryption tools & platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; fresh graduates or lateral movers into IT must be able to code in the languages they have studied.
• Well-versed in storage, networks, and storage networking basics, which will enable you to work in a cloud environment.
• Well-versed in network, data, and application security basics, which will enable you to work in a cloud as well as a business applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
Type, Location
Full Time @ Anywhere in India
Desired Experience
2+ years
Job Description
What You’ll Do
● Deploy, automate and maintain web-scale infrastructure with leading public cloud vendors such as Amazon Web Services, Digital Ocean & Google Cloud Platform.
● Take charge of DevOps activities for CI/CD with the latest tech stacks.
● Acquire industry-recognized professional cloud certifications (AWS/Google) in the capacity of developer or architect. Devise multi-region technical solutions.
● Implement the DevOps philosophy and strategy across different domains in the organisation.
● Build automation at various levels, including code deployment, to streamline the release process
● Be responsible for the architecture of cloud services
● 24x7 monitoring of the infrastructure
● Use programming/scripting in your day-to-day work
● Have shell experience, for example PowerShell on Windows or Bash on *nix
● Use a version control system, preferably Git
● Hands-on with the CLI/SDK/API of at least one public cloud (GCP, AWS, DO)
● Scalability, HA and troubleshooting of web-scale applications.
● Infrastructure-As-Code tools like Terraform, CloudFormation
● CI/CD systems such as Jenkins, CircleCI
● Container technologies such as Docker, Kubernetes, OpenShift
● Monitoring and alerting systems: e.g. NewRelic, AWS CloudWatch, Google StackDriver, Graphite, Nagios/ICINGA
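As a rough illustration of the flow that CI/CD systems like Jenkins or CircleCI automate, stages run in a fixed order and the pipeline halts at the first failure. A sketch (stage names and the callable interface here are hypothetical, not any specific product's API):

```python
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    """Run named stages in order; stop at the first failing stage.

    Returns the list of stage names that completed successfully.
    """
    completed = []
    for name, step in stages:
        if not step():
            print(f"pipeline failed at stage: {name}")
            break
        completed.append(name)
    return completed

# Hypothetical stages standing in for checkout/build/test/deploy jobs.
result = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),   # a failing test run halts the pipeline
    ("deploy", lambda: True),  # never reached
])
print(result)  # ['build']
```

Real pipeline definitions are declarative (Jenkinsfile, CircleCI YAML), but the fail-fast semantics are the same.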
What you bring to the table
● Hands-on experience in cloud compute services, Cloud Functions, networking, load balancing, autoscaling.
● Hands-on with GCP/AWS compute & networking services such as Compute Engine, App Engine, Kubernetes Engine, Cloud Functions, networking (VPC, firewall, load balancer), Cloud SQL, Datastore.
● DBs: PostgreSQL, MySQL, Elasticsearch, Redis, Kafka, MongoDB, or other NoSQL systems
● Configuration management tools such as Ansible/Chef/Puppet
Bonus if you have…
● Basic understanding of networking (routing, switching, DNS) and storage
● Basic understanding of protocols such as TCP/UDP
● Basic understanding of cloud computing and its service models, like SaaS and PaaS
● Basic understanding of Git or any other source code repository
● Basic understanding of databases (SQL/NoSQL)
● Great problem-solving skills
● Good communication skills
● Adaptable and eager to learn
Hiring for a funded fintech startup based in Bangalore!
Our Ideal Candidate
We are looking for a Senior DevOps engineer to join the engineering team and help us automate the build, release, packaging and infrastructure provisioning and support processes. The candidate is expected to own the full life-cycle of provisioning, configuration management, monitoring, maintenance and support for cloud as well as on-premise deployments.
Requirements
- 5-plus years of DevOps experience managing the Big Data application stack, including HDFS, YARN, Spark, Hive and HBase
- Deep understanding of all the configurations required for installing and maintaining the infrastructure in the long run
- Experience setting up high availability, configuring resource allocation, setting up capacity schedulers, and handling data recovery tasks
- Experience with middle-layer technologies, including web servers (httpd, nginx), application servers (JBoss, Tomcat) and database systems (PostgreSQL, MySQL)
- Experience setting up enterprise security solutions including setting up active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
- Experience maintaining and hardening the infrastructure by regularly applying required security packages and patches
- Experience supporting on-premise solutions as well as on AWS cloud
- Experience working with and supporting Spark-based applications on YARN
- Experience with one or more automation tools such as Ansible, Terraform, etc.
- Experience working with CI/CD tools like Jenkins and various test report and coverage plugins
- Experience defining and automating the build, versioning and release processes for complex enterprise products
- Experience supporting clients remotely and on-site
- Experience working with and supporting Java- and Python-based tech stacks would be a plus
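Automating "build, versioning and release processes" usually includes mechanical steps like version bumping. A minimal semantic-version helper as a sketch (the function and its interface are illustrative, not any particular product's release tooling):

```python
def bump(version: str, part: str) -> str:
    """Bump a MAJOR.MINOR.PATCH version string and reset the lower parts."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

print(bump("1.4.2", "minor"))  # 1.5.0
```

In a release pipeline a helper like this would tag the commit and feed the new version into artifact names, keeping versioning out of human hands.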
Desired Non-technical Requirements
- Very strong communication skills, both written and verbal
- Strong desire to work with start-ups
- Must be a team player
Job Perks
- Attractive variable compensation package
- Flexible working hours – everything is results-oriented
- Opportunity to work with an award-winning organization in the hottest space in tech – artificial intelligence and advanced machine learning
Task:
- Run our software products in different international environments (on-premise and with cloud providers)
- Support the developers while debugging issues
- Analyse and monitor software at runtime to find bugs and performance issues, and plan the growth of the system
- Integrate new technologies to support our products while growing in the market
- Develop continuous integration and continuous deployment pipelines
- Maintain our on-premise hosted servers and applications: operating system upgrades, software upgrades, introducing new database versions, etc.
- Automate tasks to reduce human error and parallelize work
We wish:
- Basic OS knowledge (Debian, CentOS, Suse Enterprise Linux)
- Web server administration and optimization (Apache, Traefik)
- Database administration and optimization (MySQL/MariaDB, Oracle, Elasticsearch)
- JVM and application server administration and optimization (ServiceMix, Karaf, GlassFish, Spring Boot)
- Scripting experience (Perl, Python, PHP, Java)
- Monitoring experience (Icinga/Nagios, AppDynamics, Prometheus, Grafana)
- Knowledge of container management (Docker/containerd, DC/OS, Kubernetes)
- Experience with automatic deployment processes (Ansible, Gitlab-CI, Helm)
- Define and optimize processes for system maintenance, continuous integration, and continuous delivery
- Excellent communication skills & proficiency in English are necessary
- Leadership skills with a team-motivating approach
- Good team player
We Offer:
- Freedom to realise your own ideas & individual career & development opportunities.
- A motivating work environment, a flat hierarchy, numerous unforgettable company events, and a fun, flexible workplace.
- Professional challenges and career development opportunities.
Your contact for this position is Janki Raval.
Would you like to become part of this highly innovative, dynamic, and exciting world?
We look forward to receiving your resume.
Key skills:
• Deliver and support the deployment of Red Hat Ansible Automation Platform automation for enterprises
• Design, create, and deliver content that will enable support of automation solutions at scale
• Working experience (min. 6 months) in Ansible and RESTful APIs
• Experience implementing a continuous integration (CI) or continuous delivery (CD) pipeline
• Intermediate-level scripting skills (e.g., Python)
• Very good analytical/problem-solving skills
• Working experience on at least one virtualized platform (VMware/Red Hat/Microsoft)
• Infrastructure (server/storage/network) management experience (desirable)
• Relational database concepts (desirable)
• Understanding of cloud concepts
Experience:
• 3+ years of hands-on Red Hat Ansible Automation Platform & DevOps experience
Implement DevOps capabilities in cloud offerings using CI/CD toolsets and automation
Define and set development, test, release, update, and support processes for DevOps operation
Troubleshoot and fix code bugs
Coordinate and communicate within the team and with the client team
Select and deploy appropriate CI/CD tools
Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD pipeline)
Pre-requisite skills required:
Experience working on Linux-based infrastructure
Experience scripting in at least 2 languages (Bash + Python/Ruby)
Working knowledge of various tools, open-source technologies, and cloud services
Experience with Docker, AWS (EC2, S3, IAM, EKS, Route 53), Ansible, Helm, Terraform
Experience building, maintaining, and deploying Kubernetes environments and applications
Experience with build and release automation and dependency management; implementing CI/CD
Clear fundamentals of DNS, HTTP, HTTPS, microservices, monoliths, etc.
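The day-to-day scripting asked for here is often small glue code like a log-scanning check. A sketch in Python (the "LEVEL message" log format and the alert threshold are assumptions for illustration):

```python
def error_rate(log_lines: list) -> float:
    """Fraction of lines that are ERROR-level, assuming a 'LEVEL message' format."""
    if not log_lines:
        return 0.0
    errors = sum(1 for line in log_lines if line.startswith("ERROR"))
    return errors / len(log_lines)

logs = [
    "INFO request served",
    "ERROR upstream timeout",
    "INFO request served",
    "ERROR upstream timeout",
]
rate = error_rate(logs)
print(f"error rate: {rate:.0%}")  # error rate: 50%
if rate > 0.1:  # hypothetical alert threshold
    print("ALERT: error rate above 10%")
```

A production setup would ship logs to something like ELK or CloudWatch and alert from there, but interviews often probe exactly this kind of quick one-off analysis.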
- Provide consultation and review all outgoing critical customer communications.
- Apply DevOps thinking to bring the development and IT Ops processes, people, and tools together within the company in order to increase speed, efficiency, and quality.
- Perform architecture and security reviews for different projects, and work with leads to develop the strategy and roadmap for client requirements. Be involved in designing the overall architecture of the system with other leads/architects.
- Develop and grow engineers in DevOps technology to meet the incoming requirements from the business team.
- Work with senior technical team to bring in new technologies/tools being used within the company. Develop and promote best practices and emerging concepts for DevSecOps and secure CI/CD. Participate in Solution Strategy, innovation areas, and technology roadmap.
Key Skills:
- Deals positively with high levels of uncertainty, ambiguity, and shifting priorities.
- Ability to influence stakeholders as a trusted advisor across all levels, including teams outside of shared services.
- Ability to think outside of the box and be innovative by keeping abreast of new trends, identifying opportunities to bring in change for business benefit.
- Implementing CI (Continuous Integration) and CD (Continuous Deployment). Good exposure to CI & build management tools like Jenkins, Azure DevOps, GitHub Actions, Maven, Gradle, etc.
- Deployment and provisioning tools (Chef/Ansible/Terraform/AWS CDK, etc.)
- Docker orchestration tools like Kubernetes/Swarm, etc.
- Good hands-on knowledge of automation scripting (Python, Shell, Ruby, etc.)
- Version control for source code management (SCM): Git/Bitbucket, etc.
- Expertise in Linux-based systems (Unix, Linux, Ubuntu), including managing security and Linux file system permissions
- Container orchestration tools (Kubernetes, Swarm, Mesos/Marathon); Docker, including writing Dockerfiles and Docker Compose files
- Expertise in managing cloud resources and good exposure to Docker
- Public/private/hybrid cloud: AWS / Microsoft Azure / Google Cloud Platform, etc.
- Extensive experience with cloud services, elastic capacity administration, and cloud deployment and migration
- Good to have: knowledge of tools like Splunk, New Relic, PagerDuty, VictorOps
- Familiarity with network protocols and elements: TCP/IP, HTTP(S), SSL, DNS, firewalls, routers, load balancers, proxies
- Excellent at creating new and improving existing workflows within the agile software development lifecycle
- Familiar with incident and change management processes
- Ability to effectively prioritize work with fast-changing requirements
- Troubleshoot and debug infrastructure, network, and operating system issues
- Resolve complex issues in scenarios like resource consumption, server performance, backup strategy, and scaling
- Investigate and perform root cause analysis on user-reported issues and provide a workaround before implementing a final fix
- Monitor servers and applications to ensure the smooth running of the IT architecture (applications, services, schedulers, server performance, etc.)
Design Skills:
- Interpret and implement the designs of others adhering to standards and guidelines
- Design solutions within their area of expertise using technologies that already exist within Tesco
- Understand the roadmaps for their area of Technology
- Design secure solutions
- Design solutions that can be consumed in a self-service manner by the engineering teams
- Understand the impact of technologies at enterprise scale
- Demonstrate knowledge of the latest technology trends related to Infrastructure
- Understand how industry trends impact their own area
- Identify opportunities to automate work and deliver against them