

Laravel Developer Responsibilities:
- Discussing project aims with the client and development team.
- Designing and building web applications using Laravel.
- Troubleshooting issues in the implementation and debugging builds.
- Working with front-end and back-end developers on projects.
- Testing functionality for users and the backend.
- Ensuring that integrations run smoothly.
- Scaling projects based on client feedback.
- Recording and reporting on work done in Laravel.
- Maintaining web-based applications.
- Presenting work in meetings with clients and management.
Laravel Developer Requirements:
- A degree in programming, computer science, or a related field.
- Experience working with PHP, performing unit testing, and managing RESTful APIs.
- A solid understanding of application design using Laravel.
- Knowledge of database design and querying using SQL (a minimal sketch follows this list).
- Proficiency in HTML and JavaScript.
- Practical experience using the MVC architecture.
- A portfolio of applications and programs to your name.
- Problem-solving skills and critical mindset.
- Great communication skills.
- The desire and ability to learn.
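The database requirement above can be illustrated with a short, hedged sketch. It uses Python's built-in sqlite3 module rather than Laravel's Eloquent, purely so it is self-contained and runnable; the table names and data are made up for illustration.

```python
# Minimal sketch of relational design and SQL querying, using Python's built-in
# sqlite3 module so it runs anywhere. Table and column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),
        total REAL NOT NULL
    );
    INSERT INTO users (name) VALUES ('Asha'), ('Ravi');
    INSERT INTO orders (user_id, total) VALUES (1, 250.0), (1, 99.5), (2, 40.0);
""")

# Join and aggregate: total spend per user
for name, total in conn.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
"""):
    print(name, total)
```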
Note: We are hiring candidates from Tamil Nadu only.

- Responsible for building, managing, and maintaining deployment pipelines and developing self-service tooling for managing Git, Linux, Kubernetes, Docker, CI/CD pipelines, etc. in cloud infrastructure.
- Responsible for building and managing the DevOps agile toolchain.
- Responsible for working as an integrator between developer teams and various cloud infrastructures.
- Responsibilities include helping the development team with best practices, provisioning, monitoring, troubleshooting, optimizing and tuning, and automating and improving deployment and release processes.
- Responsible for maintaining application security with periodic tracking and upgrading of package dependencies in coordination with the respective developer teams.
- Responsible for packaging and containerizing deployable units and strategizing this in coordination with the developer team (a minimal sketch follows this list).
- Setting up tools and required infrastructure. Defining and setting up development, test, release, update, and support processes for the DevOps operation.
- Responsible for documentation of the process.
- Responsible for leading projects with end to end execution
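As a rough illustration of the packaging and containerization responsibility above, here is a minimal sketch using the Docker SDK for Python. The context directory, image tag and registry are placeholders, not part of this role's actual toolchain.

```python
# Minimal sketch: build and push a deployable unit as a container image
# using the Docker SDK for Python. Image name and registry are placeholders.
import docker

def build_and_push(context_dir: str, image_tag: str) -> None:
    client = docker.from_env()  # talk to the local Docker daemon

    # Build the image from the Dockerfile in context_dir
    image, build_logs = client.images.build(path=context_dir, tag=image_tag)
    for chunk in build_logs:
        if "stream" in chunk:
            print(chunk["stream"].strip())

    # Push to the registry encoded in the tag (e.g. a private registry)
    for line in client.images.push(image_tag, stream=True, decode=True):
        print(line)

if __name__ == "__main__":
    build_and_push(".", "registry.example.com/team/service:1.0.0")
```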
Qualification: Bachelor of Engineering/MCA, preferably with AWS Cloud certification.
Ideal Candidate -
- has 2-4 years of experience, with AWS certification and DevOps experience.
- is less than 30 years of age, self-motivated and enthusiastic.
- is interested in building a sustainable DevOps platform with maximum automation.
- is interested in learning and being challenged on a day-to-day basis.
- can take ownership of tasks and is willing to take the necessary action to get them done.
- can solve complex problems.
- is honest about the quality of their work and is comfortable taking ownership of both their successes and failures.
Company Overview
Adia Health revolutionizes clinical decision support by enhancing diagnostic accuracy and personalizing care. It modernizes the diagnostic process by automating optimal lab test selection and interpretation, utilizing a combination of expert medical insights, real-world data, and artificial intelligence. This approach not only streamlines the diagnostic journey but also ensures precise, individualized patient care by integrating comprehensive medical histories and collective platform knowledge.
Position Overview
We are seeking a talented and experienced Site Reliability Engineer/DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our infrastructure and applications. You will collaborate closely with development, operations, and product teams to automate processes, implement best practices, and improve system reliability.
Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure solutions using modern DevOps practices.
- Automate deployment, monitoring, and maintenance processes to streamline operations and increase efficiency.
- Monitor system performance and troubleshoot issues, ensuring timely resolution to minimize downtime and impact on users.
- Implement and manage CI/CD pipelines to automate software delivery and ensure code quality.
- Manage and configure cloud-based infrastructure services to optimize performance and cost.
- Collaborate with development teams to design and implement scalable, reliable, and secure applications.
- Implement and maintain monitoring, logging, and alerting solutions to proactively identify and address potential issues (see the sketch after this list).
- Conduct periodic security assessments and implement appropriate measures to ensure the integrity and security of systems and data.
- Continuously evaluate and implement new tools and technologies to improve efficiency, reliability, and scalability.
- Participate in on-call rotation and respond to incidents promptly to ensure system uptime and availability.
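As a hedged illustration of the monitoring and alerting responsibility above, the sketch below creates a single CloudWatch alarm with boto3. The metric, dimension values, thresholds and SNS topic ARN are placeholders, not Adia's actual configuration.

```python
# Minimal sketch: create a CloudWatch alarm that pages when API 5xx errors spike.
# Alarm name, load balancer dimension, and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="api-5xx-error-rate-high",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-api/abc123"}],
    Statistic="Sum",
    Period=60,                 # evaluate one-minute windows
    EvaluationPeriods=5,       # must breach 5 consecutive periods
    Threshold=10,              # more than 10 errors per minute
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:on-call-alerts"],
)
```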
Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- Proven experience (5+ years) as a Site Reliability Engineer, DevOps Engineer, or similar role
- Strong understanding of cloud computing principles and experience with AWS
- Experience building and supporting complex CI/CD pipelines using GitHub
- Experience building and supporting infrastructure as code using Terraform
- Proficiency in scripting and automation tools
- Solid understanding of networking concepts and protocols
- Understanding of security best practices and experience implementing security controls in cloud environments
- Knowledge of modern security and compliance requirements such as SOC 2, HIPAA, and HITRUST is a strong advantage.
About us
Classplus is India's largest B2B ed-tech start-up, enabling 1 lakh+ educators and content creators to create their digital identity with their own branded apps. Founded in 2018, we have grown more than 10x in the last year into India's fastest-growing video learning platform.
Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured our Series D funding.
Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!
What will you do?
· Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective
· Create standardized tooling and templates for development teams to create CI/CD pipelines
· Ensure infrastructure is created and maintained using Terraform (a minimal sketch follows this list)
· Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines.
· Maintain transparency and clear visibility of costs associated with various product verticals and environments, and work with stakeholders to plan for optimization and implementation
· Spearhead continuous experimenting and innovating initiatives to optimize the infrastructure in terms of uptime, availability, latency and costs
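A minimal sketch of one way to keep infrastructure maintained through Terraform, as mentioned above: a CI drift check that runs terraform plan and fails when live infrastructure has diverged from code. The working directory is a placeholder; this is illustrative, not Classplus's actual pipeline.

```python
# Minimal sketch: a CI drift check that fails the pipeline when live infrastructure
# no longer matches the Terraform code. Directory path is a placeholder.
import subprocess
import sys

def check_drift(workdir: str) -> int:
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)

    # -detailed-exitcode: 0 = no changes, 1 = error, 2 = pending changes (drift)
    result = subprocess.run(
        ["terraform", "plan", "-input=false", "-detailed-exitcode"],
        cwd=workdir,
    )
    if result.returncode == 2:
        print("Drift detected: infrastructure differs from Terraform definitions")
    return result.returncode

if __name__ == "__main__":
    sys.exit(check_drift("infra/production"))
```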
You should apply, if you
1. Are a seasoned Veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript(NodeJS), Go, Python, Java, Erlang, Elixir, C++ or Ruby (experience in any one of them is enough)
2. Are a Mr. Perfectionist: You have a strong bias for automation and taking the time to think about the right way to solve a problem versus quick fixes or band-aids.
3. Bring your A-Game: Have hands-on experience and ability to design/implement infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, Logging and experience in setting up backups, patching and DR planning
4. Are up with the times: Have expertise in one or more cloud platforms (Amazon Web Services, Google Cloud Platform or Microsoft Azure), and have experience in creating and managing infrastructure completely through a tool like Terraform
5. Have it all on your fingertips: Have experience building CI/CD pipelines using Jenkins and Docker for applications mostly running on Kubernetes. Hands-on experience in managing and troubleshooting applications running on K8s (see the sketch after this list)
6. Have nailed the data storage game: Good knowledge of relational and NoSQL databases (MySQL, Mongo, BigQuery, Cassandra…)
7. Bring that extra zing: Have the ability to program/script and strong fundamentals in Linux and networking.
8. Know your toys: Have a good understanding of microservices architecture, Big Data technologies and experience with highly available distributed systems, scaling data store technologies, and creating multi-tenant and self-hosted environments; that's a plus
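As a hedged illustration of the hands-on Kubernetes troubleshooting mentioned in point 5, the sketch below uses the official Kubernetes Python client to flag deployments that are not fully available; the namespace and kubeconfig assumptions are placeholders.

```python
# Minimal sketch: flag Kubernetes deployments whose replicas are not all available,
# the kind of first-pass triage described above. Namespace is a placeholder.
from kubernetes import client, config

def unhealthy_deployments(namespace: str = "default") -> list[str]:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()

    problems = []
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        available = dep.status.available_replicas or 0
        if available < desired:
            problems.append(f"{dep.metadata.name}: {available}/{desired} available")
    return problems

if __name__ == "__main__":
    for line in unhealthy_deployments("default"):
        print(line)
```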
Being Part of the Clan
At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!
It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️
Are you a go-getter with the chops to nail what you do? Then this is the place for you.
Candidate must have a minimum of 8 years of IT experience.
IST time zone.
Candidates should have hands-on experience in DevOps (GitLab, Artifactory, SonarQube, AquaSec, Terraform & Docker/K8s).
Thanks & Regards,
Anitha. K
TAG Specialist
Role Introduction
• This role involves guiding the DevOps team towards successful delivery of governance and toolchain initiatives by removing manual tasks.
• Operate toolchain applications to empower engineering teams by providing reliable, governed self-service tools and supporting their adoption.
• Driving good practice for consumption and utilisation of the engineering toolchain, with a focus on DevOps practices.
• Drive good governance for cloud service consumption.
• Involves working in a collaborative environment, leading the team and providing technical leadership to team members.
• Involves setting up processes and improvements for teams supporting various DevOps tooling and governing that tooling.
• Coordinating with multiple teams within the organization.
• Lead on handovers from architecture teams to support major project rollouts which require the Toolchain Governance DevOps team to operationally support tooling.
What you will do
• Identify and implement best practices, process improvements and automation initiatives to achieve quicker delivery by removing manual tasks.
• Ensure best practices and processes are documented for reusability, and keep up to date with good practices and standards.
• Provide re-usable automation and compliance services, tools and processes.
• Support and manage the toolchain, including toolchain changes and selection.
• Identify and implement risk mitigation plans, avoid escalations and resolve blockers for teams. Toolchain governance will involve operating and responding to alerts, enforcing good tooling governance by driving automation, remediating technical debt and ensuring the latest tools are utilised and on the latest versions.
• Triage product pipelines, performance issues, SLA/SLO breaches and service unavailability, along with ancillary actions such as providing access to logs, tools and environments.
• Be involved in initial/detailed estimates during roadmap planning or feature estimation/planning of any automation identified for a given toolset.
• Develop, refine and tune integrations between various tools (a minimal sketch of one such integration follows this list).
• Discuss with the Product Owner/team any challenges from an implementation or deployment perspective, assist in arriving at a probable solution, and escalate any risks to get them resolved with respect to the DevOps toolchain.
• In consultation with the Head of DevOps and other stakeholders, prioritize items and item-task breakdown; accountable for squad deliverables for the sprint.
• Be involved in reviewing current components, plan for upgrades and ensure they are communicated to the wider audience within the organization.
• Be involved in reviewing access/roles and enhancing and automating provisioning.
• Identify and encourage areas for growth and improvement within the team, e.g. conduct regular 1-2-1s with squad members to provide support, mentoring and goal setting.
• Be involved in performance management, rewards and recognition of team members, and in the hiring process.
• Plan for upskilling the team on the relevant tools and tasks. Ensure quicker onboarding of new joiners/freshers so they become productive.
• Review ticket metrics to measure the health of the project, including SLAs, and plan for improvement.
• Be on call for critical incidents that happen out of hours, based on tooling SLAs. This may include planning the standby schedule for the squad, carrying out a retrospective for every callout and reviewing SLIs/SLOs.
• Own the tech/repair debt, risk and compliance for the tooling with respect to infrastructure, pipelines, access, etc.
• Track optimum utilization of resources and monitor/track the delivery schedule.
• Review solution designs with the Architects/Principal DevOps Engineers as required.
• Provide monthly reporting which aligns to DevOps Tooling KPIs.
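As an illustrative example of the tool-to-tool integrations mentioned above, the sketch below turns a monitoring alert into a Jira issue through Jira's standard REST API; the site URL, project key, issue type and credentials are placeholders, not this team's actual configuration.

```python
# Minimal sketch: one direction of a tool-to-tool integration, turning a monitoring
# alert into a Jira issue via Jira's REST API. URL, project key and credentials
# are placeholders; the fields follow the standard Jira Cloud v2 issue API.
import os
import requests

def open_incident_ticket(summary: str, description: str) -> str:
    response = requests.post(
        "https://your-org.atlassian.net/rest/api/2/issue",
        auth=(os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"]),
        json={
            "fields": {
                "project": {"key": "TOOL"},
                "issuetype": {"name": "Incident"},
                "summary": summary,
                "description": description,
            }
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["key"]  # e.g. "TOOL-123"

if __name__ == "__main__":
    print(open_incident_ticket("SLO breach: build pipeline latency", "Auto-created from alert"))
```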
What you will have
• Candidates should have 8+ years of experience, with hands-on DevOps experience and experience in team management.
• Strong communication and interpersonal skills; a team player.
• Good working experience with CI/CD tools like Jenkins, SonarQube, FOSSA, Harness, Jira, JSM, ServiceNow, etc.
• Good hands-on knowledge of AWS services like EC2, ECS, S3, IAM, SNS, SQS, VPC, Lambda, API Gateway, CloudWatch, CloudFormation, etc.
• Experience in operating and governing a DevOps toolchain.
• Experience in operational monitoring and alerting, and in identifying and delivering on both repair and technical debt.
• Experience and background in ITIL/ITSM processes. The candidate will ensure development of the appropriate ITSM model and processes, based on the ITIL Service Management framework. This includes the strategy, design, transition and operation services and continual service improvement.
• ITSM leadership experience and coaching of processes.
• Experience with various tools like Jenkins, Harness, Fossa.
• Experience hosting and managing applications on AWS/Azure.
• Experience in CI/CD pipelines (Jenkins build pipelines).
• Experience in containerization (Docker/Kubernetes).
• Experience in any programming language (Node.js or Python is preferred).
• Experience in architecting and supporting cloud-based products will be a plus.
• Experience in PowerShell and Bash will be a plus.
• Able to self-manage multiple concurrent small projects, including managing priorities between projects.
• Able to quickly learn new tools.
• Should be able to mentor/drive junior team members to achieve the desired outcomes of the roadmap.
• Ability to analyse information to identify problems and issues, and make effective decisions within a short span.
• Excellent problem solving and critical thinking.
• Experience in integrating various components, including unit testing and CI/CD configuration.
• Experience reviewing the current toolset and planning for upgrades.
• Experience with the Agile framework and Jira/JSM tools.
• Good communication skills and the ability to communicate/work independently with external teams.
• Highly motivated, able to work proficiently both independently and in a team environment
Good knowledge and experience with security constructs –
Kutumb is the first and largest communities platform for Bharat. We are growing at an exponential trajectory. More than 1 Crore users use Kutumb to connect with their community. We are backed by world-class VCs and angel investors. We are growing and looking for exceptional Infrastructure Engineers to join our Engineering team.
More on this here - https://kutumbapp.com/why-join-us.html
We’re excited if you have:
- Recent experience designing and building unified observability platforms that enable companies to use the sometimes-overwhelming amount of available data (metrics, logs, and traces) to determine quickly if their application or service is operating as desired
- Expertise in deploying and using open-source observability tools in large-scale environments, including Prometheus, Grafana, ELK (ElasticSearch + Logstash + Kibana), Jaeger, Kiali, and/or Loki (a minimal sketch follows this list)
- Familiarity with open standards like OpenTelemetry, OpenTracing, and OpenMetrics
- Familiarity with Kubernetes and Istio as the architecture on which the observability platform runs, and how they integrate and scale. Additionally, the ability to contribute improvements back to the joint platform for the benefit of all teams
- Demonstrated customer engagement and collaboration skills to curate custom dashboards and views, and identify and deploy new tools, to meet their requirements
- The drive and self-motivation to understand the intricate details of a complex infrastructure environment
- Using CI/CD tools to automatically perform canary analysis and roll out changes after passing automated gates (think Argo & keptn)
- Hands-on experience working with AWS
- Bonus points for knowledge of ETL pipelines and Big data architecture
- Great problem-solving skills & takes pride in your work
- Enjoys building scalable and resilient systems, with a focus on systems that are robust by design and suitably monitored
- Abstracting all of the above into as simple an interface as possible (like Knative) so developers don't need to know about it unless they choose to open the escape hatch
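As a hedged sketch of the observability instrumentation referenced above, the snippet below exposes request counters and latency histograms with the prometheus_client library so Prometheus can scrape them; the port, metric names and simulated workload are illustrative only.

```python
# Minimal sketch: instrumenting a service with Prometheus metrics using the
# prometheus_client library, so it can be scraped into a unified observability
# platform. Port and metric names are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["route"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds", ["route"])

def handle_request(route: str) -> None:
    with LATENCY.labels(route=route).time():   # observe request duration
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(route=route).inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request("/home")
```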
What you’ll be doing:
- Design and build automation around the chosen tools to make onboarding new services easy for developers (dashboards, alerts, traces, etc)
- Demonstrate great communication skills in working with technical and non-technical audiences
- Contribute new open-source tools and/or improvements to existing open-source tools back to the CNCF ecosystem
Tools we use:
Kops, Argo, Prometheus/ Loki/ Grafana, Kubernetes, AWS, MySQL/ PostgreSQL, Apache Druid, Cassandra, Fluentd, Redis, OpenVPN, MongoDB, ELK
What we offer:
- High pace of learning
- Opportunity to build the product from scratch
- High autonomy and ownership
- A great and ambitious team to work with
- Opportunity to work on something that really matters
- Top of the class market salary and meaningful ESOP ownership
Do Your Thng
DYT - Do Your Thng - is an app where all social media users can share brands they love with their followers and earn money while doing so! We believe everyone is an influencer. Our aim is to democratise social media and allow people to be rewarded for the content they post. How does DYT help you? It accelerates your career through collaboration opportunities with top brands and gives you access to a community full of experts in the influencer space.
Role: DevOps
Job Description:
We are looking for experienced DevOps Engineers to join our Engineering team. The candidate will work with our engineers and interact with the tech team to deliver high-quality web applications for a product.
Required Experience
- DevOps Engineer with 2+ years of experience in development and production operations supporting Linux and Windows based applications and cloud deployments (AWS/GCP stack)
- Experience working with Continuous Integration and Continuous Deployment Pipeline
- Exposure to managing LAMP stack-based applications
- Experience with resource provisioning automation using tools such as CloudFormation, Terraform and ARM templates (a minimal sketch follows this list).
- Experience working closely with clients, understanding their requirements, and designing and implementing quality solutions to meet their needs.
- Ability to take ownership of the work carried out
- Experience coordinating with the rest of the team to deliver well-architected and high-quality solutions.
- Experience deploying Docker-based applications
- Experience with AWS services.
- Excellent verbal and written communication skills
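As a hedged illustration of the resource provisioning automation listed above, the sketch below creates a CloudFormation stack with boto3 and waits for it to finish; the stack name, template path, parameters and region are placeholders.

```python
# Minimal sketch: resource provisioning automation with boto3 and CloudFormation.
# Stack name, template path and parameters are placeholders.
import boto3

def deploy_stack(stack_name: str, template_path: str) -> None:
    cfn = boto3.client("cloudformation", region_name="ap-south-1")
    with open(template_path) as f:
        template_body = f.read()

    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Parameters=[{"ParameterKey": "Environment", "ParameterValue": "staging"}],
        Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM roles
    )
    # Block until the stack finishes creating (raises on failure/rollback)
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

if __name__ == "__main__":
    deploy_stack("dyt-staging-network", "templates/network.yaml")
```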
Desired Experience
- Exposure to AWS, Google Cloud and Azure
- Experience in Jenkins, Ansible, Terraform
- Building monitoring tools and responding to alarms triggered in the production environment
- Willingness to quickly become a member of the team and to do what it takes to get the job done
- Ability to work well in a fast-paced environment and listen and learn from stakeholders
- Demonstrate a strong work ethic and incorporate company values in your everyday work.
- Equal Experts is an innovative software delivery consultancy specializing in the delivery of custom software solutions for blue-chip enterprise and public sector clients across a range of industry sectors.
- We deliver market-leading propositions across the digital, online and mobile channels, and are recognized for our leadership in the application of Agile and Lean delivery methods to assure delivery.
- We are focused on hiring DevOps Engineers with skills in AWS, automation tools, CI/CD tools and the like.
- The DevOps Consultant will be working with an on-site client team and will be expected to mentor and share their skills and knowledge with the existing client developers.
Your Responsibilities Include:
- Continuous Delivery of quality software created by our project teams. Ensuring smooth build and release with continuous integration.
- Configuration Management - Design and implementation of deployment strategies for multiple projects.
- Virtualization and Infrastructure Provisioning.
- Maintenance and upgrade of our cloud environment (AWS).
- Version control and source code administration.
- Administration of Web Servers, Application Servers and Servlet Containers
- Setting up and managing the Automation efforts for multiple projects.
Technical Expertise:
- Should have worked on Agile projects featuring weekly iterations and releases.
- Should have extensive hands-on experience with:
- Continuous Integration tools like Jenkins.
- Configuration Management tools like Terraform or Ansible.
- Cloud computing using AWS (EC2).
- Hands-on experience with Kubernetes.
- Virtualization tools like Vagrant, Docker or VMware
- You have an active presence in the DevOps community through your blogs, Stack Overflow and GitHub profiles.
- You are passionate about DevOps. You love to mentor people and evangelize about best practices and innovations in DevOps.


DevOps Engineer Skills
- Building a scalable and highly available infrastructure for data science
- Knows data science project workflows
- Hands-on with deployment patterns for online/offline predictions (server/serverless)
- Experience with either Terraform or Kubernetes
- Experience with ML deployment frameworks like Kubeflow, MLflow, SageMaker
- Working knowledge of Jenkins or a similar tool
Responsibilities
- Owns all the ML cloud infrastructure (AWS)
- Helps build out an entire CI/CD ecosystem with auto-scaling
- Works with a testing engineer to design testing methodologies for ML APIs
- Ability to research and implement new technologies
- Helps with cost optimization of infrastructure
- Knowledge sharing
Nice to Have
- Develops APIs for machine learning
- Can write Python servers for ML systems with API frameworks (a minimal sketch follows this list)
- Understanding of task queue frameworks like Celery
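As a hedged sketch of the "Python servers for ML systems" item above, the snippet below serves a pre-trained model behind a FastAPI endpoint; the model path and feature schema are placeholders, and any real deployment would sit behind the CI/CD and auto-scaling infrastructure described earlier.

```python
# Minimal sketch: a Python prediction server for an ML model behind an API
# framework (FastAPI here). The model path and feature schema are placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained scikit-learn model

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    prediction = model.predict([req.features])[0]
    return {"prediction": float(prediction)}

# Run with: uvicorn server:app --host 0.0.0.0 --port 8080
```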

