
About NomiSo India: NomiSo is a product and services engineering company. We are a team of Software Engineers, Architects, Managers, and Cloud Experts with expertise in Technology and Delivery Management.
Our mission is to Empower and Enhance the lives of our customers through simple solutions for their complex business problems.
At NomiSo, we encourage entrepreneurial spirit - to learn, grow and improve. A great workplace thrives on ideas and opportunities. That is a part of our DNA. We’re in pursuit of colleagues who share similar passions, are nimble, and thrive when challenged. We offer a positive, stimulating, and fun environment – with opportunities to grow, a fast-paced approach to innovation, and a place where your views are valued and encouraged.
We invite you to push your boundaries and join us in fulfilling your career aspirations!
What You Can Expect from Us:
We work hard to provide our team with the best opportunities to grow their careers. You can expect to be a pioneer of ideas, a student of innovation, and a leader of thought. Innovation and thought leadership is at the center of everything we do at all levels of the company. Let’s make your career great!
Position Overview:
You will be responsible for creating and managing databases such as MongoDB, MariaDB, and Oracle. We greatly value teamwork, so working with developers, debugging, and helping them tune their queries is a highly valued facet of the DBE role. You will never be working “alone”: you will have the skill sets of many talented engineers and system admins to draw upon when you need them.
Roles and Responsibilities:
- Assist in design and development of database systems.
- Optimize database systems for performance and reliability.
- Perform database maintenance and troubleshooting activities.
- Test database systems and perform bug fixes.
- Provide database solutions based on technical documents and business requirements.
- Develop database functions, scripts, stored procedures and triggers to support application development.
- Provide technical assistance to resolve all database issues related to performance, capacity and access.
- Ensure data integrity and quality in database systems.
- Maintain standard policies for database development activities.
- Identify and rectify database errors in a timely manner.
- Create physical and logical database models as per the business requirements.
- Manage and monitor performance, capacity and security of database systems.
- Prepare documentation regarding database design, configuration, and change management tasks.
- Mentor database administrators to manage the company databases effectively.
- Perform data back-up and archival on a regular basis.
- Take on challenging tasks under pressure.
- Provide 24x7 on-call support in rotation.
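As an illustration of the “functions, scripts, stored procedures and triggers” responsibility above, here is a minimal, hedged sketch using Python’s built-in sqlite3 module (a stand-in for MariaDB/Oracle trigger syntax; the table names are hypothetical): an AFTER INSERT trigger that maintains an audit trail.

```python
import sqlite3

# Hypothetical schema: an orders table whose inserts are audited by a trigger.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
CREATE TABLE order_audit (order_id INTEGER, logged_at TEXT);

-- The trigger fires after every insert and records the new row's id.
CREATE TRIGGER log_new_order AFTER INSERT ON orders
BEGIN
    INSERT INTO order_audit VALUES (NEW.id, datetime('now'));
END;
""")

conn.execute("INSERT INTO orders (total) VALUES (19.99)")
audited = [row[0] for row in conn.execute("SELECT order_id FROM order_audit")]
```

MariaDB and Oracle use the same AFTER INSERT / NEW-row concepts, though their dialects differ in detail.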
Must Have Skills:
- 8+ years of overall experience, including 3+ years in database engineering, administration, and support, with a minimum of 3 years specifically in MongoDB, MariaDB, and Oracle databases.
- Expertise in MongoDB/MariaDB/MySQL administration in a production environment. Must have hands-on NoSQL experience, from evaluating new frameworks to deploying, maintaining, and performance-tuning clusters.
- Solid understanding of SQL and NoSQL landscape and available frameworks.
- Strong in Linux and network maintenance.
- Proficiency in one or more scripting languages, including Perl, Python, and Shell.
- Strong SQL skills and experience.
Good to Have Skills:
- Proficiency with Elasticsearch and Redis is a big plus.
Qualification:
- Bachelor of Science in Computer Science or equivalent technical training and professional work experience.
Location:
- Bangalore
Website: https://www.nomiso.io/

Key Responsibilities:
Cloud Management:
- Manage and troubleshoot Linux environments.
- Create and manage Linux users on EC2 instances.
- Handle AWS services, including ECR, EKS, EC2, SNS, SES, S3, RDS, Lambda, DocumentDB, IAM, ECS, EventBridge, ALB, and SageMaker.
- Perform start/stop operations for SageMaker and EC2 instances.
- Solve IAM permission issues.
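The start/stop duties above can be sketched with a small helper written against a boto3-style EC2 client (`start_instances`/`stop_instances` are real boto3 EC2 client methods; the client is passed in here so the logic can be exercised without AWS credentials):

```python
def toggle_ec2_instances(ec2_client, instance_ids, action):
    """Start or stop a batch of EC2 instances and return the API response.

    ec2_client is expected to expose the boto3 EC2 client interface;
    instance_ids is a list of instance id strings, e.g. ["i-0abc123"].
    """
    if action == "start":
        return ec2_client.start_instances(InstanceIds=instance_ids)
    if action == "stop":
        return ec2_client.stop_instances(InstanceIds=instance_ids)
    raise ValueError(f"unsupported action: {action!r}")
```

In production this would be called with `boto3.client("ec2")`; SageMaker notebook instances have the analogous `start_notebook_instance`/`stop_notebook_instance` calls on the SageMaker client.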
Containerization and Deployment:
- Create and manage ECS services.
- Implement IP whitelisting for enhanced security.
- Configure target mapping for load balancers and manage Glue jobs.
- Create load balancers (as needed).
CI/CD Setup:
- Set up and maintain CI/CD pipelines using AWS CodeCommit, CodeBuild, CodeDeploy, and CodePipeline.
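A CodeBuild stage in such a pipeline is driven by a buildspec.yml; below is a minimal sketch assuming a Python project (runtime version and commands are illustrative, not prescribed by this role):

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.11
  build:
    commands:
      - pip install -r requirements.txt
      - pytest
artifacts:
  files:
    - '**/*'
```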
Database Management:
- Manage PostgreSQL RDS instances, ensuring optimal performance and security.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Minimum of 3.5 years of experience in AWS and DevOps.
- Strong experience with Linux administration.
- Proficient in AWS services, particularly ECR, EKS, EC2, SNS, SES, S3, RDS, DocumentDB, IAM, ECS, EventBridge, ALB, and SageMaker.
- Experience with CI/CD tools (AWS CodeCommit, CodeBuild, CodeDeploy, CodePipeline).
- Familiarity with PostgreSQL and database management.
Responsibilities
- Implement various development, testing, automation tools, and IT infrastructure
- Design, build and automate the AWS infrastructure (VPC, EC2, Networking, EMR, RDS, S3, ALB, Cloud Front, etc.) using Terraform
- Manage end-to-end production workloads hosted on Docker and AWS
- Automate CI pipeline using Groovy DSL
- Deploy and configure Kubernetes clusters (EKS)
- Design and build a CI/CD Pipeline to deploy applications using Jenkins and Docker
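The last two bullets (Groovy-DSL CI automation and a Jenkins + Docker pipeline) can be sketched as a minimal declarative Jenkinsfile; the image name and test command are hypothetical:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Build an application image tagged with the Jenkins build number.
                sh 'docker build -t myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Test') {
            steps {
                // Run the test suite inside the freshly built image.
                sh 'docker run --rm myapp:${BUILD_NUMBER} pytest'
            }
        }
    }
}
```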
Eligibility
- At least 8 years of proven experience in AWS-based DevOps/cloud engineering and implementations
- Expertise in all common AWS Cloud services like EC2, EKS, S3, VPC, Lambda, API Gateway, ALB, Redis, etc.
- Experience in deploying and managing production environments in Amazon AWS
- Strong experience in continuous integration and continuous deployment
- Knowledge of application build, deployment, and configuration using tools such as Jenkins
Description
Do you dream about code every night? If so, we’d love to talk to you about a new product that we’re making to enable delightful testing experiences at scale for development teams who build modern software solutions.
What You'll Do
Troubleshooting and analyzing technical issues raised by internal and external users.
Working with Monitoring tools like Prometheus / Nagios / Zabbix.
Developing automation in one or more technologies such as Terraform, Ansible, CloudFormation, Puppet, or Chef is preferred.
Monitor infrastructure alerts and take proactive action to avoid downtime and customer impacts.
Working closely with the cross-functional teams to resolve issues.
Test, build, design, deploy, and maintain continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.
Work in close coordination with the development and operations teams to ensure application performance is in line with customer expectations.
What you should have
Bachelor’s or Master’s degree in computer science or any related field.
3-6 years of experience with Linux/Unix and cloud computing techniques.
Familiarity with working on cloud and datacenter environments for enterprise customers.
Hands-on experience with Linux, Windows, and macOS, and with Batch, AppleScript, and Bash scripting.
Experience with various databases such as MongoDB, PostgreSQL, MySQL, MSSQL.
Familiar with AWS technologies like EC2, S3, Lambda, IAM, etc.
Must know how to choose the tools and technologies that best fit the business needs.
Experience in developing and maintaining CI/CD processes using tools like Git, GitHub, Jenkins etc.
Excellent organizational skills to adapt to a constantly changing technical environment
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services, and tooling to improve the uptime, reliability, and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Implement consistent observability, deployment and IaC setups
- Lead incident management and actively respond to escalations/incidents in the production environment from customers and the support team.
- Hire/Mentor other Infrastructure engineers and review their work to continuously ship improvements to production infrastructure and its tooling.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Lead infrastructure security audits
Requirements
- At least 7 years of experience in handling/building Production environments in AWS.
- At least 3 years of programming experience in building API/backend services for customer-facing applications in production.
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Experience in security hardening of infrastructure, systems and services.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Exposure/Experience in setting up or managing Cloudflare, Qualys and other related tools
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points – Hands-on experience with Nginx, Postgres, Postfix, Redis or Mongo systems.

Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators, but securing it is incredibly difficult. It’s the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter, a fast learner, with a desire to work in a startup environment
- Experience working with Public Clouds like AWS
- Operating and Monitoring cloud infrastructure on AWS.
- Primary focus on building, implementing and managing operational support
- Design, develop, and troubleshoot automation scripts (configuration/infrastructure as code or others) for managing infrastructure.
- Expert at one of the scripting languages – Python, shell, etc
- Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc
- Handling load monitoring, capacity planning, and services monitoring.
- Proven experience with CI/CD pipelines and with handling database-upgrade-related issues.
- Good understanding of and experience working with containerized environments like Kubernetes, and datastores like Cassandra, Elasticsearch, MongoDB, etc.
This company is a network of the world's best developers, offering full-time, long-term remote software jobs with better compensation and career growth. We enable our clients to accelerate their cloud offerings and capitalize on the cloud. We have our own IoT/AI platform, and we provide professional services on that platform to build custom clouds for clients' IoT devices. We also build mobile apps and run 24x7 DevOps/site reliability engineering for our clients.
We are looking for very hands-on SRE (Site Reliability Engineering) engineers with 3 to 6 years of experience. The person will be part of a team that is responsible for designing and implementing automation from scratch for medium- to large-scale cloud infrastructure and providing 24x7 services to our North American and European customers. This also includes ensuring ~100% uptime for 50+ internal sites. The person is expected to deliver with both high speed and high quality, and to work 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month.
This person MUST have:
- B.E Computer Science or equivalent
- 2+ years of hands-on experience troubleshooting and setting up Linux environments, with the ability to write shell scripts for any given requirement.
- 1+ Years of hands-on experience setting up/configuring AWS or GCP services from SCRATCH and maintaining them.
- 1+ Years of hands-on experience setting up/configuring Kubernetes & EKS and ensuring high availability of container orchestration.
- 1+ Years of hands-on experience setting up CICD from SCRATCH in Jenkins & Gitlab.
- Experience configuring/maintaining one monitoring tool.
- Excellent verbal & written communication skills.
- Candidates with certifications - AWS, GCP, CKA, etc will be preferred
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS).
Experience:
- Minimum 3 years of experience as an SRE automation engineer building, running, and maintaining production sites. We are not looking for candidates with experience only in L1/L2 or build-and-deploy roles.
Location:
- Remote, anywhere in India
Timings:
- 40 hours per week (~6.5 hours per day, 6 days per week), in shifts that rotate every month.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, a Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
Requirements
- Experience: 3-5 Years
- Scripting: PowerShell and either JavaScript or Python
- Hands-on experience with Kubernetes and Docker
- Good to have: Azure or AWS
- Hands-on experience with any of these database technologies: NoSQL administration, Cosmos DB, MongoDB, MariaDB
- Good to have: analytics knowledge
Your skills and experience should cover:
- 5+ years of experience developing, deploying, and debugging solutions on the AWS platform using AWS services such as S3, IAM, Lambda, API Gateway, RDS, Cognito, CloudTrail, CodePipeline, CloudFormation, CloudWatch, and WAF (Web Application Firewall).
- AWS Certified Developer – Associate is required; AWS DevOps Engineer – Professional is preferred.
- 5+ years of experience using one or more modern programming languages (Python, Node.js).
- Hands-on experience migrating data to the AWS cloud platform.
- Experience with Scrum/Agile methodology.
- Good understanding of core AWS services, their uses, and basic AWS architecture best practices (including security and scalability).
- Experience with AWS data storage tools.
- Experience configuring and implementing AWS tools such as CloudWatch and CloudTrail, and directing system logs for monitoring.
- Experience working with Git or similar tools.
- Ability to communicate and represent AWS recommendations and standards.
The following areas are highly advantageous:
- Experience with Docker.
- Experience with PostgreSQL.


