


About Concentric AI
Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators but securing it is incredibly difficult. It’s the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Candid answers by the company
Concentric AI is a US-based Series-A-funded startup. It has engineering offices in Bangalore, San Jose, and Pune.
We are in the data security domain and use AI and machine learning to solve customer use cases. Our customers are large and mid-size enterprises across verticals.
Using Semantic Intelligence, we address data security without rules, regex, or end-user involvement. Our product provides visibility into the who, where, and how of an enterprise customer's sensitive data. With Concentric AI, a customer can automatically remediate and minimize data risk.
You can find details of the Concentric product on our website: www.concentric.ai
Job Description:
Responsibilities
· Taking end-to-end (E2E) responsibility for the Azure landscape of our customers
· Managing code releases and operational tasks within a global team, with a focus on automation, maintainability, security, and customer satisfaction
· Making use of the CI/CD framework to rapidly support lifecycle management of the platform
· Acting as L2-L3 support for incidents, problems, and service requests
· Work with various Atos and 3rd party teams to resolve incidents and implement changes
· Implement and drive automation and self-healing solutions to reduce toil
· Enhancing error budgets through hands-on design and development of solutions to address reliability issues and/or risks
· Support ITSM processes and collaborate with service management representatives
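The self-healing automation mentioned above can be sketched as a small health-check-and-remediate policy. This is a minimal illustration, not the team's actual tooling; the systemctl wiring and service name in the comments are assumptions.

```python
def self_heal(is_healthy, restart) -> bool:
    """Run the restart action only when the health check fails.

    Returns True if a remediation was performed, False if the service was
    already healthy. Both callables are injected so the policy can be
    unit-tested without touching a real service manager.
    """
    if is_healthy():
        return False
    restart()
    return True

# Example wiring (systemctl usage and the service name are assumptions):
# import subprocess
# self_heal(
#     is_healthy=lambda: subprocess.run(
#         ["systemctl", "is-active", "--quiet", "myservice"]).returncode == 0,
#     restart=lambda: subprocess.run(["systemctl", "restart", "myservice"]),
# )
```

Keeping the check and the remediation injectable is what makes this kind of toil-reduction logic testable before it touches production.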
Job Requirements
· Azure Associate certification or equivalent knowledge level
· 5+ years of professional experience
· Experience with Terraform and/or native Azure automation
· Knowledge of CI/CD concepts and toolset (i.e. Jenkins, Azure DevOps, Git)
· Must be adaptable to work in a varied, fast-paced, ever-changing environment
· Good analytical and problem-solving skills to resolve technical issues
· Understanding of Agile development and SCRUM concepts a plus
· Experience with Kubernetes architecture and tools a plus
● Manage AWS services and day-to-day cloud operations.
● Work closely with the development and QA teams to make the deployment process smooth, and devise new tools and technologies to automate most of the components.
● Strengthen the infrastructure in terms of reliability (configuring HA, etc.), security (cloud network management, VPC, etc.) and scalability (configuring clusters, load balancers, etc.).
● Expert-level understanding of DB replication, sharding (MySQL DB systems), HA clusters, failovers, and recovery mechanisms.
● Build and maintain CI/CD (continuous integration/deployment) workflows.
● Expert knowledge of AWS EC2, S3, RDS, CloudFront, and other AWS services and products.
● Install and manage software systems to support the development team, e.g. DB installation and administration, web servers, caching, and other such systems.
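The sharding responsibility above boils down to routing each record key to a fixed shard. A minimal hash-based shard router might look like this (the shard count and key format are hypothetical, not from this posting):

```python
import hashlib

def shard_for(key: str, n_shards: int = 4) -> int:
    """Deterministically map a record key to one of n_shards database shards.

    A stable hash (MD5 here, used for distribution, not security) guarantees
    that every lookup for the same key lands on the same shard.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_shards

# Same key, same shard, every time:
assert shard_for("user:1001") == shard_for("user:1001")
```

Real MySQL sharding setups add rebalancing and replica failover on top, but the routing core is this simple mapping.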
Requirements:
● B.Tech or Bachelor's degree in a related field.
● 2-5 years of hands-on experience with AWS cloud services such as EC2, ECS, CloudWatch, SQS, S3, CloudFront, Route 53.
● Experience with setting up CI/CD pipelines and successfully running large-scale systems.
● Experience with source control systems (SVN, Git, etc.) and deployment and build automation tools like Jenkins, Bamboo, Ansible, etc.
● Good experience and understanding of Linux/Unix-based systems, and hands-on experience working with them with respect to networking, security, and administration.
● At least 1-2 years of experience with shell/Python/Perl scripting; experience with Bash scripting is an added advantage.
● Experience with automation tasks like automated backups, configuring failovers, and automating deployment-related processes is a must-have.
● Good to have: knowledge of setting up the ELK stack; Infrastructure-as-Code tools like Terraform; working with and automating processes via AWS SDK/CLI tools and scripts.
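The automated-backup requirement above is often met by scripting around the AWS CLI. The sketch below only composes `aws ec2 create-snapshot` commands; the volume IDs and description format are illustrative assumptions, and a real cron job would execute each command via `subprocess.run`.

```python
import datetime
import shlex

def snapshot_command(volume_id: str) -> str:
    """Build an `aws ec2 create-snapshot` CLI command for a nightly backup.

    The description embeds the volume ID and date so snapshots are easy
    to find and to expire later.
    """
    stamp = datetime.date.today().isoformat()
    description = f"nightly-backup-{volume_id}-{stamp}"
    return " ".join([
        "aws", "ec2", "create-snapshot",
        "--volume-id", shlex.quote(volume_id),
        "--description", shlex.quote(description),
    ])

# Hypothetical volume IDs, for illustration only:
for vol in ["vol-0abc123", "vol-0def456"]:
    print(snapshot_command(vol))
```

Generating the commands separately from running them keeps the backup logic easy to dry-run and review.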
As a MLOps Engineer in QuantumBlack you will:
Develop and deploy technology that enables data scientists and data engineers to build, productionize, and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
Choose and use the right cloud services, DevOps tooling, and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).
Shape and support next-generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps.
Our Tech Stack
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React, TypeScript, amongst others in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., for experiment tracking, model governance, packaging, deployment, or a feature store)
• Practical knowledge of delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
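The automated-testing skill above (pytest and similar frameworks) amounts to writing plain functions with bare asserts that the framework discovers and runs. A minimal sketch, using a toy function invented for illustration:

```python
def slugify(title: str) -> str:
    """Toy function under test (hypothetical): lowercase and hyphenate a title."""
    return "-".join(title.lower().split())

# pytest collects any function named test_*; bare asserts are the assertions.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_idempotent():
    assert slugify(slugify("Hello World")) == "hello-world"

if __name__ == "__main__":
    # Without pytest installed, the tests can still be run directly.
    test_slugify_basic()
    test_slugify_idempotent()
    print("ok")
```

Because the tests are just functions and asserts, the same file works both under `pytest` and as a plain script.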
Summary
We are building the fastest, most reliable & intelligent trading platform. That requires highly available, scalable & performant systems. And you will be playing one of the most crucial roles in making this happen.
You will be leading our efforts in designing, automating, deploying, scaling and monitoring all our core products.
Tech Facts so Far
1. 8+ services deployed on 50+ servers
2. 35K+ concurrent users on average
3. 1M+ algorithms run every min
4. 100M+ messages/min
We are a 4-member backend team with 1 DevOps Engineer. Yes, all of this is done by this incredibly lean team.
Big Challenges for You
1. Manage 25+ services on 200+ servers
2. Achieve 99.999% (5 Nines) availability
3. Make 1-minute automated deployments possible
If you like to work on extreme scale, complexity & availability, then you will love it here.
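The "5 Nines" target above translates into a very small downtime budget. A quick calculation (using a 365.25-day year) makes the target concrete:

```python
def downtime_budget_minutes(availability: float, days: float = 365.25) -> float:
    """Minutes of allowed downtime per year at a given availability level."""
    minutes_per_year = days * 24 * 60
    return minutes_per_year * (1 - availability)

for nines, sla in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    print(f"{nines} nines -> {downtime_budget_minutes(sla):.2f} min/year")
# 5 nines -> 5.26 min/year
```

At 99.999%, the whole year's outage budget is barely over five minutes, which is why 1-minute automated deployments matter at that level.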
Who are we
We are on a mission to help retail traders prosper in the stock market. In just 3 years, we have built the 3rd most popular app for the stock markets in India. And we aim to be the de-facto trading app in the next 2 years.
We are a young, lean team of ordinary people building exceptional products that solve real problems. We love to innovate, thrill customers, and work with brilliant & humble humans.
Key Objectives for You
• Spearhead system & network architecture
• CI, CD & Automated Deployments
• Achieve 99.999% availability
• Ensure in-depth & real-time monitoring, alerting & analytics
• Enable faster root cause analysis with improved visibility
• Ensure a high level of security
Possible Growth Paths for You
• Be our Lead DevOps Engineer
• Be a Performance & Security Expert
Perks
• Challenges that will push you beyond your limits
• A democratic place where everyone is heard & aware
DevOps Engineer
Job Description:
The position requires a broad set of technical and interpersonal skills, from deployment technologies, monitoring, and scripting to networking and infrastructure. You should be well versed in troubleshooting production issues and able to drive them through to root cause analysis (RCA).
Skills:
- Manage VMs across multiple datacenters and AWS to support dev/test and production workloads.
- Strong hands-on experience with Ansible is preferred
- Strong knowledge and hands-on experience in Kubernetes Architecture and administration.
- Should have core knowledge in Linux and System operations.
- Proactively and reactively resolve incidents as escalated from monitoring solutions and end users.
- Conduct and automate audits for network and systems infrastructure.
- Do software deployments, per documented processes, with no impact to customers.
- Follow existing devops processes while having flexibility to create and tweak processes to gain efficiency.
- Troubleshoot connectivity problems across network, systems or applications.
- Follow security guidelines, both policy and technical to protect our customers.
- Ability to automate recurring tasks to increase velocity and quality.
- Should have worked with at least one database (Postgres/Mongo/Cockroach/Cassandra)
- Should have knowledge and hands-on experience in managing ELK clusters.
- Scripting knowledge in Shell/Python is an added advantage.
- Hands-on experience with K8s-based microservice architecture is an added advantage.

Striim (pronounced “stream” with two i’s for integration and intelligence) was founded in 2012 with a simple goal of helping companies make data useful the instant it’s born.
Striim’s enterprise-grade, streaming integration with intelligence platform makes it easy to build continuous, streaming data pipelines – including change data capture (CDC) – to power real-time cloud integration, log correlation, edge processing, and streaming analytics.
• 2-5 years of experience in any programming language (polyglot preferred) and system operations
• Awareness of DevOps and Agile methodologies
• Proficient in leveraging CI and CD tools to automate testing and deployment
• Experience working in an agile, fast-paced environment
• Hands-on knowledge of at least one cloud platform (AWS / GCP / Azure)
• Cloud networking knowledge: should understand VPCs, NATs, and routers
• Contributions to open source are a plus
• Good written communication skills are a must; contributions to technical blogs / whitepapers will be an added advantage

- Working on scalability, maintainability and reliability of company's products.
- Working with clients to solve their day-to-day challenges, moving manual processes to automation.
- Keeping systems reliable and gauging the effort it takes to reach there.
- Juxtaposing tools and technologies to choose x over y.
- Understanding Infrastructure as Code and applying software design principles to it.
- Automating tedious work using your favourite scripting languages.
- Taking code from the local system to production by implementing Continuous Integration and Delivery principles.
What you need to have:
- Worked with any one of the programming languages like Go, Python, Java, Ruby.
- Work experience with public cloud providers like AWS, GCP or Azure.
- Understanding of Linux systems and Containers
- Meticulous in creating and following runbooks and checklists
- Microservices experience and use of orchestration tools like Kubernetes/Nomad.
- Understanding of Computer Networking fundamentals like TCP, UDP.
- Strong bash scripting skills.
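The networking fundamentals above show up constantly in runbooks, e.g. checking whether a TCP service is reachable. A stdlib sketch (host and port values are illustrative, not from this posting):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example probe of a local service (the port is illustrative):
print(tcp_port_open("127.0.0.1", 22))
```

The same one-liner check is what `nc -z` does in a bash runbook; wrapping it in a function makes it easy to loop over a checklist of endpoints.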
MTX Group Inc. is seeking a motivated DevOps Engineer to join our team. MTX Group Inc is a global cloud implementation partner that enables organizations to become a fit enterprise through digital transformation and strategy. MTX is powered by the Maverick.io Artificial Intelligence platform and has a strong presence in the Public Sector providing proprietary designs and innovative concept accelerators around licensing and permitting, inspections, grants management, case management, and program management. MTX is a strategic partner with Salesforce with specialty expertise in Einstein Analytics, Mulesoft, Customer Community, Commerce Cloud, and Marketing Cloud. MTX is a Google Cloud partner helping accelerate digital transformation programs across federal, state, and local government agencies.
The DevOps role is responsible for maintaining infrastructure and both development and operational deployments in multiple cloud environments for MTX Group, Inc. and their clients. This role adheres to and promotes MTX Group, Inc.'s values by performing respective duties in a manner that supports and contributes to the achievement of MTX Group, Inc.'s goals.
Responsibilities:
- Develop and manage tools and services to be used by the organization and by external users of the platform
- Automate all operational and repetitive tasks to improve efficiency and productivity of all development teams
- Research and propose new solutions to improve the mavQ platform in aspects of speed, scalability, and security
- Automate and manage the cloud infrastructure of the organization, distributed across the globe and across multiple cloud providers such as Google Cloud and AWS
- Ensure thorough logging, monitoring, and alerting for all services and code running in the organization
- Work with development teams to define communications and protocols for distributed microservices
- Help development teams debug DevOps-related issues
- Manage CI/CD, Source Control and IAM for the organization
What you will bring:
- Bachelor’s Degree or equivalent
- 4+ years of experience as a DevOps Engineer OR
- 2+ years of experience as backend developer and 2+ years of experience as DevOps or Systems engineer
- Hands on experience with Docker and Kubernetes
- Thorough understanding of operating systems and networking
- Theoretical and practical understanding of Infrastructure-as-code and Platform-as-a-service concepts
- Ability to understand and work with any service, tool or API as needed
- Ability to understand implementation of open source products and modify them if necessary
- Ability to visualize large scale distributed systems and debug issues or make changes to said systems
- Understanding and practical experience in managing CI/CD
What we offer:
- A competitive salary on par with top market standards
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover up to two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance) -
- Sum Insured: INR. 25,00,000/- Per Employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver benefit
- Monthly Internet Reimbursement of up to Rs. 1,000
- Opportunity to pursue Executive Programs/ courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others
***********************
We are looking for a Sr. DevOps and SysOps Engineer who is responsible for managing AWS and Azure cloud environments. Your primary focus will be to help multiple projects with various cloud service implementations, create and manage CI/CD pipelines for deployment, explore new cloud services, and help projects implement them.
Technical Requirements & Responsibilities
- Have 4+ years’ experience as a DevOps and SysOps Engineer.
- Apply cloud computing skills to deploy upgrades and fixes on AWS and Azure (GCP is optional / Good to have).
- Design, develop, and implement software integrations based on user feedback.
- Troubleshoot production issues and coordinate with the development team to streamline code deployment.
- Implement automation tools and frameworks (CI/CD pipelines).
- Analyze code and communicate detailed reviews to development teams to ensure a marked improvement in applications and the timely completion of projects.
- Collaborate with team members to improve the company’s engineering tools, systems and procedures, and data security.
- Optimize the company’s computing architecture.
- Conduct systems tests for security, performance, and availability.
- Develop and maintain design and troubleshooting documentation.
- Expert in code deployment tools (Puppet, Ansible, and Chef).
- Can maintain Java / PHP / Ruby on Rails / .NET web applications.
- Experience in network, server, and application-status monitoring.
- Possess a strong command of software-automation production systems (Jenkins and Selenium).
- Expertise in software development methodologies.
- You have working knowledge of known DevOps tools like Git and GitHub.
- Possess a problem-solving attitude.
- Can work independently and as part of a team.
Soft Skills Requirements
- Strong communication skills
- Agility and quick learning
- Attention to detail
- Organizational skills
- Understanding of the Software development life cycle
- Good Analytical and problem-solving skills
- Self-motivated with the ability to prioritize, meet deadlines, and manage changing priorities
- Should have a high level of energy working as an individual contributor and as part of a team.
- Good command over verbal and written English communication

