We are looking for an experienced candidate with strong development and programming experience. Preferred knowledge:
- Cloud computing (e.g., Kubernetes, AWS, Google Cloud, Azure)
- Strong development background with programming experience in Java and/or Node.js (other languages such as Groovy or Python are a big bonus)
- Proficient with Unix systems and Bash
- Proficient with Git/GitHub/GitLab/Bitbucket
Desired skills:
- Docker
- Kubernetes
- Jenkins
- Experience in any scripting language (Python, Shell scripting, JavaScript)
- NGINX / Load Balancer
- Splunk / ETL tools
About IntelliFlow Solutions Pvt Ltd
IntelliFlow is a next-gen low-code/no-code SaaS platform for companies to design, build, and deploy enterprise applications at scale, faster. It allows citizen developers to build anything they can imagine, from enterprise-grade apps that help companies run smoothly to process automation. IntelliFlow is a low-code platform for everyone to make business better, go live sooner, and get to success faster. It provides modern visual interfaces for both citizen and enterprise developers to drag and drop application components and connect them together to build modern web/mobile applications. Powerful features such as a form builder, workflow designer, rule engine, and data modeler allow users to design, build, deploy, manage, and track business applications quickly and efficiently.
About the Role:
We are seeking a talented and passionate DevOps Engineer to join our dynamic team. You will be responsible for designing, implementing, and managing scalable and secure infrastructure across multiple cloud platforms. The ideal candidate will have a deep understanding of DevOps best practices and a proven track record in automating and optimizing complex workflows.
Key Responsibilities:
Cloud Management:
- Design, implement, and manage cloud infrastructure on AWS, Azure, and GCP.
- Ensure high availability, scalability, and security of cloud resources.
Containerization & Orchestration:
- Develop and manage containerized applications using Docker.
- Deploy, scale, and manage Kubernetes clusters.
CI/CD Pipelines:
- Build and maintain robust CI/CD pipelines to automate the software delivery process.
- Implement monitoring and alerting to ensure pipeline efficiency.
Version Control & Collaboration:
- Manage code repositories and workflows using Git.
- Collaborate with development teams to optimize branching strategies and code reviews.
Automation & Scripting:
- Automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or similar.
- Write scripts to optimize and maintain workflows.
Monitoring & Logging:
- Implement and maintain monitoring solutions to ensure system health and performance.
- Analyze logs and metrics to troubleshoot and resolve issues.
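As a rough illustration of the monitoring-and-alerting work described above, here is a minimal sketch of a health-check rule in Python. The service names and thresholds are hypothetical; a production setup would use a dedicated stack such as Prometheus with Alertmanager rather than a hand-rolled script.

```python
# Minimal health-check/alerting sketch (hypothetical services and thresholds).

def evaluate_health(service, status_code, latency_ms, max_latency_ms=500):
    """Return a list of alert strings for a single service check."""
    alerts = []
    if status_code >= 500:
        alerts.append(f"{service}: server error (HTTP {status_code})")
    if latency_ms > max_latency_ms:
        alerts.append(f"{service}: slow response ({latency_ms} ms > {max_latency_ms} ms)")
    return alerts

# Sample check results: (service, HTTP status, observed latency in ms).
checks = [
    ("auth-api", 200, 120),
    ("billing-api", 503, 80),
    ("search-api", 200, 900),
]

for service, code, latency in checks:
    for alert in evaluate_health(service, code, latency):
        print("ALERT:", alert)
```

In practice the check results would come from scraped metrics or synthetic probes, and the alerts would be routed to a pager or chat channel instead of stdout.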
Required Skills & Qualifications:
- 3-5 years of experience with AWS, Azure, and Google Cloud Platform (GCP).
- Proficiency in containerization tools like Docker and orchestration tools like Kubernetes.
- Hands-on experience building and managing CI/CD pipelines.
- Proficient in using Git for version control.
- Experience with scripting languages such as Bash, Python, or PowerShell.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Solid understanding of networking, security, and system administration.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and teamwork skills.
Preferred Qualifications:
- Certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, or Google Professional DevOps Engineer.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with serverless architectures and microservices.
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Job Title: Solution Architect (ML, Cloud)
Experience: 5-10 years
Client Location: Bangalore
Work Location: Tokyo, Japan (Onsite)
Key Responsibilities:
Collaborate with stakeholders to understand business needs and develop scalable, efficient technical solutions.
Architect and implement complex systems integrating Machine Learning, Cloud platforms (AWS, Azure, Google Cloud), and Full Stack Development.
Lead the development and deployment of cloud-native applications using NoSQL databases, Python, and Kubernetes.
Design and optimize algorithms to improve performance, scalability, and reliability of solutions.
Review, validate, and refine architecture to ensure flexibility, scalability, and cost-efficiency.
Mentor development teams and ensure adherence to best practices for coding, testing, and deployment.
Contribute to the development of technical documentation and solution roadmaps.
Stay up-to-date with emerging technologies and continuously improve solution design processes.
Required Skills & Qualifications:
5-10 years of experience as a Solution Architect or similar role with expertise in ML, Cloud, and Full Stack Development.
Proficiency in at least two major cloud platforms (AWS, Azure, Google Cloud).
Solid experience with Kubernetes for container orchestration and deployment.
Hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
Expertise in Python and ML frameworks like TensorFlow, PyTorch, etc.
Practical experience implementing at least two real-world algorithms (e.g., classification, clustering, recommendation systems).
Strong knowledge of scalable architecture design and cloud-native application development.
Familiarity with CI/CD tools and DevOps practices.
Excellent problem-solving abilities and the ability to thrive in a fast-paced environment.
Strong communication and collaboration skills with cross-functional teams.
Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
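As a concrete (toy) illustration of the algorithm-implementation requirement above, here is a minimal pure-Python sketch of k-means clustering on 1-D data. It is a teaching sketch only; real project work would use scikit-learn, TensorFlow, or PyTorch.

```python
# Toy k-means (Lloyd's algorithm) on 1-D points with fixed initial centroids.

def kmeans_1d(points, centroids, iterations=10):
    """Cluster 1-D points; returns final centroids and cluster members."""
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
centroids, clusters = kmeans_1d(points, [0.0, 10.0])
print(centroids)  # two centroids, near 1.0 and 8.0
```

The same assignment/update structure generalizes to higher dimensions by swapping the absolute-difference distance for Euclidean distance.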
Preferred Qualifications:
Experience with microservices and containerization.
Knowledge of distributed systems and high-performance computing.
Cloud certifications (AWS Certified Solutions Architect, Google Cloud Professional Architect, etc.).
Familiarity with Agile methodologies and Scrum.
Japanese language proficiency is an added advantage (but not mandatory).
Skills: ML, Cloud (any two major clouds), algorithms (at least two algorithms implemented), full stack, Kubernetes, NoSQL, Python
What the role needs
● Review the current DevOps infrastructure & redefine the code merging strategy as per product roll-out objectives
● Define a deploy frequency strategy based on the product roadmap document and ongoing product-market-fit-related tweaks and changes
● Architect benchmark docker configurations based on planned stack
● Establish uniformity of environment across developer machine to multiple production environments
● Plan & execute test automation infrastructure
● Setup automated stress testing environment
● Plan and execute logging & stack trace tools
● Review DevOps orchestration tools & choices
● Coordination with external data centers and AWS in the event of provisioning, outages or maintenance.
Requirements
● Extensive experience with AWS cloud infrastructure deployment and monitoring
● Advanced knowledge of programming languages such as Python and Go, and experience writing code and scripts
● Experience with infrastructure-as-code and DevOps management tools (Terraform, Packer) for asset management, monitoring, infrastructure cost estimation, and infrastructure version management
● Configure and manage data sources like MySQL, MongoDB, Elasticsearch, Redis, Cassandra, Hadoop, etc.
● Experience with network, infrastructure and OWASP security standards
● Experience with web server configurations - Nginx, HAProxy, SSL configuration with AWS, and understanding & management of sub-domain-based product rollout for clients
● Experience with deployment and monitoring of event streaming & distributing technologies and tools - Kafka, RabbitMQ, NATS.io, socket.io
● Understanding & experience of Disaster Recovery Plan execution
● Working with other senior team members to devise and execute strategies for data backup and storage
● Be aware of current CVEs, potential attack vectors, and vulnerabilities, and apply patches as soon as possible
● Handle incident responses, troubleshooting and fixes for various services
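Much of the incident-response and troubleshooting work above deals with transient failures. One common mitigation pattern, sketched here in Python with a hypothetical flaky operation (not tied to any specific AWS API), is retry with exponential backoff:

```python
import time

def retry_with_backoff(operation, max_attempts=4, base_delay=0.01):
    """Call `operation` until it succeeds, doubling the delay after each failure."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as exc:
            if attempt == max_attempts:
                raise  # exhausted retries: surface the error to the caller
            print(f"attempt {attempt} failed ({exc}); retrying in {delay}s")
            time.sleep(delay)
            delay *= 2

# Hypothetical flaky operation: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("temporary outage")
    return "ok"

print(retry_with_backoff(flaky))
```

Production variants usually add jitter to the delay and retry only on error types known to be transient, so that genuine failures fail fast.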
About us
Classplus is India's largest B2B ed-tech start-up, enabling 1 lakh+ educators and content creators to create their digital identity with their own branded apps. Starting in 2018, we have grown more than 10x in the last year into India's fastest-growing video learning platform.
Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured our Series D funding.
Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!
What will you do?
· Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective
· Create standardized tooling and templates for development teams to create CI/CD pipelines
· Ensure infrastructure is created and maintained using Terraform
· Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines.
· Maintain transparency and clear visibility of costs associated with various product verticals and environments, and work with stakeholders to plan for optimization and implementation
· Spearhead continuous experimenting and innovating initiatives to optimize the infrastructure in terms of uptime, availability, latency and costs
You should apply, if you
1. Are a seasoned Veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript (Node.js), Go, Python, Java, Erlang, Elixir, C++ or Ruby (experience in any one of them is enough)
2. Are a Mr. Perfectionist: You have a strong bias for automation and taking the time to think about the right way to solve a problem versus quick fixes or band-aids.
3. Bring your A-Game: Have hands-on experience and ability to design/implement infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, Logging and experience in setting up backups, patching and DR planning
4. Are up with the times: Have expertise in one or more cloud platforms (Amazon Web Services, Google Cloud Platform, or Microsoft Azure), and have experience in creating and managing infrastructure entirely through a tool like Terraform
5. Have it all on your fingertips: Have experience building CI/CD pipelines using Jenkins and Docker for applications mainly running on Kubernetes. Hands-on experience in managing and troubleshooting applications running on K8s
6. Have nailed the data storage game: Good knowledge of relational and NoSQL databases (MySQL, Mongo, BigQuery, Cassandra…)
7. Bring that extra zing: Have the ability to program/script and strong fundamentals in Linux and networking.
8. Know your toys: Have a good understanding of microservices architecture, Big Data technologies and experience with highly available distributed systems, scaling data store technologies, and creating multi-tenant and self-hosted environments - that's a plus
Being Part of the Clan
At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!
It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️
Are you a go-getter with the chops to nail what you do? Then this is the place for you.
Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers use Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!
We are building this company with a fully remote approach, with our main teams in the US and India for time-zone coverage. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- At least 4 years of relevant work experience in a DevOps role
- At least 3 years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery in configuration automation tool sets such as Ansible, Chef, etc
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
What do we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
· Work from anywhere
· Flexible work timings
· Competitive compensation, including stock options
· A chance to work in the exciting generative AI space
· Quarterly team offsite events
- Hands-on knowledge of various CI/CD tools (Jenkins/TeamCity, Artifactory, UCD, Bitbucket/GitHub, SonarQube), including setting up automated build-deployment pipelines.
- Very good knowledge of scripting tools and languages such as Shell, Perl or Python, YAML/Groovy, and build tools such as Maven/Gradle.
- Hands-on knowledge of containerization and orchestration tools such as Docker, OpenShift and Kubernetes.
- Good knowledge of configuration management tools such as Ansible, Puppet/Chef, and experience setting up monitoring tools (Splunk/Geneos/New Relic/ELK).
- Expertise in job schedulers/workload automation tools such as Control-M or AutoSys is good to have.
- Hands-on knowledge of cloud technology (preferably GCP), including various computing services and infrastructure setup using Terraform.
- Should have a basic understanding of networking, certificate management, Identity and Access Management and information security/encryption concepts.
- Should support day-to-day tasks related to platform and environment upkeep such as upgrades, patching, migration and system/interface integration.
- Should have experience working in an Agile-based SDLC delivery model, multi-tasking and supporting multiple systems/apps.
- Big-data and Hadoop ecosystem knowledge is good to have but not mandatory.
- Should have worked on standard release, change and incident management tools such as ServiceNow, Remedy or similar.
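As a small illustration of the certificate-management point above, here is a Python sketch that computes the days until a certificate expires from its `notAfter` field. The sample date is made up; checking a live endpoint would additionally require an SSL socket connection to fetch the peer certificate.

```python
# Certificate-expiry helper using the stdlib ssl date parser.
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """not_after is a cert's notAfter string, e.g. 'Jun  1 12:00:00 2030 GMT'."""
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

# Hypothetical sample: a cert expiring June 1, 2030, checked on May 1, 2030.
sample = "Jun  1 12:00:00 2030 GMT"
ref = datetime(2030, 5, 1, 12, 0, 0, tzinfo=timezone.utc)
print(days_until_expiry(sample, now=ref))  # 31
```

A monitoring job would typically run such a check daily across all managed hostnames and alert when the remaining days fall below a renewal threshold.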
The AWS Cloud/Devops Engineer will be working with the engineering team and focusing on AWS infrastructure and automation. A key part of the role is championing and leading infrastructure as code. The Engineer will work closely with the Manager of Operations and Devops to build, manage and automate our AWS infrastructure.
Duties & Responsibilities:
- Design cloud infrastructure that is secure, scalable, and highly available on AWS
- Work collaboratively with software engineering to define infrastructure and deployment requirements
- Provision, configure and maintain AWS cloud infrastructure defined as code
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
- Perform infrastructure cost analysis and optimization
Qualifications:
- 1-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
- Strong understanding of how to secure AWS environments and meet compliance requirements
- Expertise using Chef for configuration management
- Hands-on experience deploying and managing infrastructure with Terraform
- Solid foundation of networking and Linux administration
- Experience with CI-CD, Docker, GitLab, Jenkins, ELK and deploying applications on AWS
- Ability to learn/use a wide variety of open source technologies and tools
- Strong bias for action and ownership
What is the role?
As a DevOps Engineer, you are responsible for setting up and maintaining the Git repository and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
Key Responsibilities
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Working on Docker images and maintaining Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes Clusters and patching when necessary.
- Working on Cloud security tools to keep applications secured.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
What are we looking for?
- Minimum 4-6 years of experience in IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools; very well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using Shell, Python, Terraform, Ansible, Puppet, or Chef.
- Experience and good understanding of any cloud, such as AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient troubleshooting skills with proven ability to resolve complex technical issues.
- Experience working with ticketing tools.
- Middleware technologies knowledge or database knowledge is desirable.
- Familiarity with the Jira tool is a plus.
What can you look for?
A wholesome opportunity in a fast-paced environment will enable you to juggle between concepts yet maintain the quality of content, interact, share your ideas, and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore, and Dublin. We have three products in our portfolio: Plum, Empuls, and Compass. Xoxoday works with over 1000 global clients. We help our clients engage and motivate their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. However, we assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
DevOps Consultant!! MERN Stack Project Manager – Systems (Enterprise or Solutions) Architect needed!
Hello superstar,
I appreciate you taking time to read this. I have posted a job for developers to work on a start-up, the link is ......
I would need someone with DevOps experience to ensure that the project is undertaken with the highest standards possible. I have had many experiences where 'completed' software, after years of development, was filled with bugs, and it would have been more cost-effective to start from scratch than to attempt to find and correct all the bugs.
I have attempted to learn as much as possible, but I now have an opportunity, and it would better serve the venture to have someone handle the management of the project to ensure that:
- We choose the most appropriate technology
- We choose competent developers in those technologies
- The architecture and data modeling are clearly defined in a ‘blueprint’ plan
- A DevOps environment and processes are set up and the developers understand what is required
- Proper tests are carried out to ensure everything works as intended
- There are processes for testers to follow and competent testers are selected to follow them
- Accessibility, localization, and internationalization are planned ahead of time
- Security, scalability, and other future probabilities that I may not even be aware of are considered and planned ahead of time
- Documentation and code reviews, refactoring and other quality assurance processes are undertaken
- Working software is produced and systems that enable new developers or teams of people to easily take over and/or contribute new modules or updates in a controlled and organized fashion
- Cost estimates, budgets/projections, and use of SaaS, hosting and other 3rd-party services and applications
I am more concerned with a professional and world-class organizational system than with any particular type of software being produced, as a strong foundation will enable anything to be created with efficacy and precision.
Again, thank you for reading this, please reply with the word “superstar” anywhere in the second line of your response. I look forward to hearing from you.
Warm wishes,
DevOps Evangelist