11+ IEEE 802.1X Jobs in Pune | IEEE 802.1X Job openings in Pune
Apply to 11+ IEEE 802.1X Jobs in Pune on CutShort.io. Explore the latest IEEE 802.1X Job opportunities across top companies like Google, Amazon & Adobe.

Primary Skills – SDA, DNAC, Cisco ISE, routing and switching, NAC troubleshooting, 802.1X, supplicant configuration
Switching Skills – IOS upgrade, SNMP, VLAN, STP, VSL
Routing Skills – OSPF, BGP, IS-IS
Network Monitoring Tools – Cisco Prime (integration, monitoring, and troubleshooting)
- Must have hands-on experience implementing NAC/802.1X (wired & wireless)
- Strong knowledge of campus LAN architecture and implementation
- Hands-on experience in routing, switching, Cisco SDA, DNAC, Cisco ISE, and Assurance
- Working experience with Cisco Prime Infrastructure: monitoring, integration, and heat-map generation
- Detailed technical understanding and troubleshooting of spanning-tree protocols (STP, PVST, RSTP, MSTP)
- Perform IOS upgrades on Switches and WLCs
- Troubleshooting skills in Quality of Service, multicast, HSRP, 802.1X, and IP SLA
- Troubleshooting skills in Cisco VSS, VSL, and stacking technologies
- Detailed technical understanding, troubleshooting, and support of routing protocols (OSPF, BGP) and MPLS in an enterprise environment
- Perform root-cause analysis and troubleshoot network outages
- Proficiency in wireless technology, implementation, and troubleshooting
- Deliver reports on the actions performed.
- Hands-on experience with networking products such as Cisco 6800, 9500, 9400, 9300, 3800, 2960-X, and 3650 switches, and Juniper EX8200, EX4200, EX3300, and EX2200 switches
- Strong ability to troubleshoot complex network issues and identify, diagnose, and resolve different types of network problems
Secondary Skills –
- CCNP Certification
- Project Management: Managing individual network projects within the scope of the Network Team
- Excellent technical and business communication skills, both oral and written
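The posting above asks for a detailed understanding of spanning-tree troubleshooting. As a purely illustrative sketch (not vendor code), root-bridge election in 802.1D-style STP picks the switch with the lowest bridge ID: lowest priority first, with the lowest MAC address as tiebreaker. The switch names and MAC addresses below are made up:

```python
# Illustrative sketch: 802.1D root-bridge election.
# The switch with the lowest bridge ID (priority first, then MAC
# address as the tiebreaker) becomes the spanning-tree root.

def elect_root(bridges):
    """bridges: list of (priority, mac_string) tuples."""
    # Compare priority first; ties are broken by the numerically lowest MAC.
    return min(bridges, key=lambda b: (b[0], int(b[1].replace(":", ""), 16)))

switches = [
    (32768, "00:1a:2b:3c:4d:5e"),
    (4096,  "00:1a:2b:3c:4d:5f"),   # lowered priority -> wins the election
    (32768, "00:0a:2b:3c:4d:60"),
]
root = elect_root(switches)         # the 4096-priority switch
```

This is why lowering the bridge priority on a distribution switch deterministically pins the root, rather than leaving it to the oldest MAC address in the campus.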


We are seeking a highly skilled and motivated MLOps Engineer with 3-5 years of experience to join our engineering team. The ideal candidate should possess a strong foundation in DevOps or software engineering principles with practical exposure to machine learning operational workflows. You will be instrumental in operationalizing ML systems, optimizing the deployment lifecycle, and strengthening the integration between data science and engineering teams.
Required Skills:
• Hands-on experience with MLOps platforms such as MLflow and Kubeflow.
• Proficiency in Infrastructure as Code (IaC) tools like Terraform or Ansible.
• Strong familiarity with monitoring and alerting frameworks (Prometheus, Grafana, Datadog, AWS CloudWatch).
• Solid understanding of microservices architecture, service discovery, and load balancing.
• Excellent programming skills in Python, with experience in writing modular, testable, and maintainable code.
• Proficient in Docker and container-based application deployments.
• Experience with CI/CD tools such as Jenkins or GitLab CI.
• Basic working knowledge of Kubernetes for container orchestration.
• Practical experience with cloud-based ML platforms such as AWS SageMaker, Databricks, or Google Vertex AI.
Good-to-Have Skills:
• Awareness of security practices specific to ML pipelines, including secure model endpoints and data protection.
• Experience with scripting languages like Bash or PowerShell for automation tasks.
• Exposure to database scripting and data integration pipelines.
Experience & Qualifications:
• 3-5+ years of experience in MLOps, Site Reliability Engineering (SRE), or Software Engineering roles.
• At least 2+ years of hands-on experience working on ML/Al systems in production settings.
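The monitoring and alerting skills listed above often come down to detecting when production inputs drift away from training data. Here is a stdlib-only toy sketch of a mean-shift drift check; the z-score threshold and all numbers are illustrative assumptions, not taken from any specific platform:

```python
# Toy sketch of model-input drift detection: compare the mean of live
# feature values against a training baseline and alert when it moves
# too many baseline standard deviations away. Threshold is illustrative.
from statistics import mean, stdev

def drift_alert(baseline, live, z_threshold=3.0):
    """Flag drift when the live mean is more than z_threshold
    baseline standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    z = abs(mean(live) - mu) / sigma
    return z > z_threshold

baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0]   # training-time feature values
ok_batch = [10.05, 9.95, 10.1]                  # live batch, no drift
shifted  = [14.0, 14.2, 13.9]                   # live batch, drifted
```

In practice a platform like Prometheus or a managed ML monitor would run a richer test (e.g. population stability index) per feature, but the alerting shape is the same: baseline, live window, threshold.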
Job Title: Senior AIML Engineer – Immediate Joiner (AdTech)
Location: Pune – Onsite
About Us:
We are a cutting-edge technology company at the forefront of digital transformation, building innovative AI and machine learning solutions for the digital advertising industry. Join us in shaping the future of AdTech!
Role Overview:
We are looking for a highly skilled Senior AIML Engineer with AdTech experience to develop intelligent algorithms and predictive models that optimize digital advertising performance. Immediate joiners preferred.
Key Responsibilities:
- Design and implement AIML models for real-time ad optimization, audience targeting, and campaign performance analysis.
- Collaborate with data scientists and engineers to build scalable AI-driven solutions.
- Analyze large volumes of data to extract meaningful insights and improve ad performance.
- Develop and deploy machine learning pipelines for automated decision-making.
- Stay updated on the latest AI/ML trends and technologies to drive continuous innovation.
- Optimize existing models for speed, scalability, and accuracy.
- Work closely with product managers to align AI solutions with business goals.
Requirements:
- Minimum 4-6 years of experience in AIML, with a focus on AdTech (Mandatory).
- Strong programming skills in Python, R, or similar languages.
- Hands-on experience with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Expertise in data processing and real-time analytics.
- Strong understanding of digital advertising, programmatic platforms, and ad server technology.
- Excellent problem-solving and analytical skills.
- Immediate joiners preferred.
Preferred Skills:
- Knowledge of big data technologies like Spark, Hadoop, or Kafka.
- Experience with cloud platforms like AWS, GCP, or Azure.
- Familiarity with MLOps practices and tools.
How to Apply:
If you are a passionate AIML engineer with AdTech experience and can join immediately, we want to hear from you. Share your resume and a brief note on your relevant experience.
Join us in building the future of AI-driven digital advertising!
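Campaign performance analysis of the kind this role describes usually starts from two standard AdTech metrics: CTR (click-through rate) and eCPM (effective cost per thousand impressions). A minimal sketch with made-up sample numbers:

```python
# Illustrative AdTech campaign metrics. All figures are sample data.

def ctr(clicks, impressions):
    """Click-through rate: clicks per impression."""
    return clicks / impressions if impressions else 0.0

def ecpm(revenue, impressions):
    """Effective CPM: revenue earned per 1,000 impressions."""
    return (revenue / impressions) * 1000 if impressions else 0.0

campaign = {"impressions": 200_000, "clicks": 1_500, "revenue": 450.0}
campaign_ctr = ctr(campaign["clicks"], campaign["impressions"])     # 0.0075
campaign_ecpm = ecpm(campaign["revenue"], campaign["impressions"])  # 2.25
```

Models for real-time ad optimization typically predict the CTR term, which then feeds bid pricing through expected value per impression.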
Company - Apptware Solutions
Location - Baner, Pune
Team Size - 130+
Job Description -
Cloud Engineer with 8+ years of experience
Roles and Responsibilities
● Have 8+ years of strong experience in deployment, management and maintenance of large systems on-premise or cloud
● Experience maintaining and deploying highly-available, fault-tolerant systems at scale
● A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)
● Practical experience with Docker containerization and clustering (Kubernetes/ECS)
● Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, VPN)
● Version control system experience (e.g. Git)
● Experience implementing CI/CD (e.g. Jenkins, TravisCI, CodePipeline)
● Operational (e.g. HA/Backups) NoSQL experience (e.g. MongoDB, Redis) SQL experience (e.g. MySQL)
● Experience with configuration management tools (e.g. Ansible, Chef)
● Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)
● Bachelor's or master’s degree in CS, or equivalent practical experience
● Effective communication skills
● Hands-on experience with cloud providers like MS Azure and GCP
● A sense of ownership and ability to operate independently
● Experience with Jira and one or more Agile SDLC methodologies
● Nice to Have:
○ Sensu and Graphite
○ Ruby or Java
○ Python or Groovy
○ Java Performance Analysis
Role: Cloud Engineer
Industry Type: IT-Software, Software Services
Functional Area: IT Software - Application Programming, Maintenance
Employment Type: Full Time, Permanent
Role Category: Programming & Design
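The posting above emphasizes highly-available, fault-tolerant systems and automation scripting. One pattern that shows up constantly in such scripts is retry with exponential backoff around flaky cloud API calls. This is a generic sketch under that assumption, not tied to any particular SDK; delays are kept tiny so the example runs instantly:

```python
# Generic retry-with-exponential-backoff sketch, a common pattern when
# calling cloud APIs from automation scripts.
import time

def retry(fn, attempts=4, base_delay=0.01):
    """Call fn(); on exception, sleep base_delay * 2**attempt and retry."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise               # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    """Simulated transient failure: fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry(flaky)               # succeeds on the third attempt
```

Production SDKs (e.g. the AWS SDKs) build in similar retry logic with jitter; the sketch just makes the mechanism visible.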

Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators, but securing it is incredibly difficult. It’s the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter, a fast learner, with a desire to work in a startup environment
- Experience working with Public Clouds like AWS
- Operating and Monitoring cloud infrastructure on AWS.
- Primary focus on building, implementing and managing operational support
- Design, Develop and Troubleshoot Automation scripts (Configuration/Infrastructure as code or others) for Managing Infrastructure.
- Expert at one of the scripting languages – Python, shell, etc
- Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc
- Handling load monitoring, capacity planning, and services monitoring.
- Proven experience with CI/CD pipelines and handling database-upgrade issues.
- Good Understanding and experience in working with Containerized environments like Kubernetes and Datastores like Cassandra, Elasticsearch, MongoDB, etc
You will drive automation to implement scalable and robust applications. You will apply your dedication and passion to server-side optimization, ensuring low latency and high performance for the cloud deployed within the datacentre. You should have sound knowledge of the OpenStack and Kubernetes domains.
YOUR ‘OKR’ SUMMARY
OKR means Objective and Key Results.
As a DevOps Engineer, you will understand the overall movement of data across the entire platform, find bottlenecks, define solutions, develop key pieces, write APIs, and own their deployment. You will work with internal and external development teams to discover these opportunities and to solve hard problems. You will also guide engineers in solving complex problems, developing your acceptance tests for those, and reviewing the work and the test results.
What you will do
- As a DevOps Engineer, be responsible for systems used by customers across the globe.
- Set the goals for the overall system and divide them into goals for each sub-system.
- Guide, motivate, convince, and mentor the architects of sub-systems, and help them achieve improvements with agility and speed.
- Identify performance bottlenecks and come up with solutions to optimize the time and cost of the build/test system.
- Be a thought leader to contribute to the capacity planning for software/hardware, spanning internal and public cloud, solving the trade-off between turnaround time and utilization.
- Bring in technologies enabling massively parallel systems to improve turnaround time by an order of magnitude.
What you will need
A strong sense of ownership, urgency, and drive. As an integral part of the development team, you will need the following skills to succeed.
- BS or BE/B.Tech or equivalent experience in EE/CS with 10+ years of experience.
- Strong background in architecting and shipping distributed, scalable software products, with a good understanding of systems programming.
- An excellent background in cloud technologies such as OpenStack, Docker, Kubernetes, Ansible, and Ceph is a must.
- Excellent understanding of hybrid, multi-cloud architecture and edge computing concepts.
- Ability to identify the bottleneck and come up with solution to optimize it.
- Programming and software development skills in Python, Shell-script along with good understanding of distributed systems and REST APIs.
- Experience in working with SQL/NoSQL database systems such as MySQL, MongoDB or Elasticsearch.
- Excellent knowledge and working experience with Docker containers and Virtual Machines.
- Ability to effectively work across organizational boundaries to maximize alignment and productivity between teams.
- Ability and flexibility to work and communicate effectively in a multi-national, multi-time-zone corporation.
Additional Advantage:
- Deep understanding of technology and passionate about what you do.
- Background in designing high-performance, scalable software systems with a strong focus on optimizing hardware cost.
- Solid collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment.
- Strong commitment to get the most performance out of a system being worked on.
- Prior development of a large software project using service-oriented architecture operating with real time constraints.
What's In It for You?
- You will get a chance to work on cloud-native and hyper-scale products
- You will be working with industry leaders in cloud.
- You can expect a steep learning curve.
- You will get the experience of solving real-time problems, and eventually you will become a problem solver.
Benefits & Perks:
- Competitive Salary
- Health Insurance
- Open Learning - 100% Reimbursement for online technical courses.
- Fast Growth - opportunities to grow quickly and surely
- Creative Freedom + Flat hierarchy
- Sponsorship for all employees who represent the company at events and meetups.
- Flexible working hours
- 5 days week
- Hybrid Working model (Office and WFH)
Our Hiring Process:
Candidates for this position can expect the following hiring process (subject to successfully clearing every round):
- Initial Resume screening call with our Recruiting team
- Next, candidates will be invited to solve coding exercises.
- Next, candidates will be invited for a first technical interview
- Next, candidates will be invited for a final technical interview
- Finally, candidates will be invited for a Culture Plus interview with HR
- Candidates may be asked to interview with the Leadership team
- Successful candidates will subsequently be made an offer via email
As always, the interviews and screening call will be conducted via a mix of telephonic and video call.
So, if you are looking for an opportunity to really make a difference, make it with us…
Coredge.io provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable central, state or local laws.
- Hands-on knowledge on various CI-CD tools (Jenkins/TeamCity, Artifactory, UCD, Bitbucket/Github, SonarQube) including setting up of build-deployment automated pipelines.
- Very good knowledge of scripting tools and languages such as Shell, Perl or Python, YAML/Groovy, and build tools such as Maven/Gradle.
- Hands-on knowledge in containerization and orchestration tools such as Docker, OpenShift and Kubernetes.
- Good knowledge in configuration management tools such as Ansible, Puppet/Chef and have worked on setting up of monitoring tools (Splunk/Geneos/New Relic/Elk).
- Expertise in job schedulers/workload automation tools such as Control-M or AutoSys is good to have.
- Hands-on knowledge on Cloud technology (preferably GCP) including various computing services and infrastructure setup using Terraform.
- Should have basic understanding on networking, certificate management, Identity and Access Management and Information security/encryption concepts.
- Should support day-to-day tasks related to platform and environment upkeep, such as upgrades, patching, migration, and system/interface integration.
- Should have experience working in an Agile-based SDLC delivery model, and be able to multi-task and support multiple systems/apps.
- Big-data and Hadoop ecosystem knowledge is good to have but not mandatory.
- Should have worked on standard release, change and incident management tools such as ServiceNow/Remedy or similar
4 – 6 years of application development with design, development, implementation, and support experience, including the following:
o C#
o JavaScript
o HTML
o SQL
o Messaging/RabbitMQ
o Asynchronous communication patterns
Experience with Visual Studio and Git
A working understanding of build and release automation, preferably with Azure DevOps
Excellent understanding of object-oriented concepts and .Net framework
Experience in creating reusable libraries in C#
Ability to troubleshoot and isolate/solve complex bugs, connectivity issues, or OS-related issues
Ability to write complex SQL queries and stored procedures in Oracle and/or MS SQL
Proven ability to use design patterns to accomplish scalable architecture
Understanding of event-driven architecture
Experience with message brokers such as RabbitMQ
Experience in the development of REST APIs
Understanding of basic steps of an Agile SDLC
Excellent communication (both written and verbal) and interpersonal skills
Demonstrated accountability and ownership of assigned tasks
Demonstrated leadership and ability to work as a leader on large and complex tasks
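The role above names RabbitMQ and asynchronous communication patterns. The core pattern is producer/consumer over a broker: publishers enqueue messages and move on; a worker drains the queue independently. As a language-agnostic sketch (the role itself is C#/.NET), here is the pattern using Python's stdlib queue standing in for a real broker; the message names are made up:

```python
# Minimal producer/consumer sketch of asynchronous messaging. A real
# deployment would use a broker such as RabbitMQ; the stdlib queue
# stands in here so the pattern itself is visible.
import queue
import threading

broker = queue.Queue()
received = []

def consumer():
    """Worker thread: drain messages until the shutdown sentinel."""
    while True:
        msg = broker.get()
        if msg is None:              # sentinel: shut down cleanly
            break
        received.append(msg.upper()) # stand-in for real message handling
        broker.task_done()

worker = threading.Thread(target=consumer)
worker.start()

for payload in ["order.created", "order.paid"]:
    broker.put(payload)              # producer publishes fire-and-forget

broker.put(None)                     # signal shutdown
worker.join()                        # wait for the worker to finish
```

The decoupling is the point: the producer never blocks on the handler, which is what makes the event-driven architecture mentioned above resilient to slow consumers.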
About the Role
- Own the end-to-end infrastructure of Sibros Cloud
- Define and introduce security best practices, identify gaps in infrastructure and come up with solutions
- Design and implement tools and software to manage Sibros’ infrastructure
- Stay hands-on, write and review code and documentation, debug and root cause issues in production environment
Minimum Qualifications
- Experience in Infrastructure as Code (IaC) to manage multi-cloud environments using cloud agnostic tools like Terraform or Ansible
- Passionate about security, with a good understanding of industry best practices
- Experience in programming languages like Python and Golang, and enjoyment of automating everything using code
- Good skills and intuition for root-causing issues in production environments
Preferred Qualifications
- Experience in database and network management
- Experience in defining security policies and best practices
- Experience in managing a large scale multi cloud environment
- Knowledge of SOC, GDPR or ISO 27001 security compliance standards is a plus
Equal Employment Opportunity
Sibros is committed to a policy of equal employment opportunity. We recruit, employ, train, compensate, and promote without regard to race, color, age, sex, ancestry, marital status, religion, national origin, disability, sexual orientation, veteran status, present or past history of mental disability, genetic information or any other classification protected by state or federal law.
Job Brief:
We are looking for candidates who have development experience and have delivered CI/CD-based projects. You should have good hands-on experience with Jenkins master/slave architecture and have used AWS-native services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. You should also have experience setting up cross-platform CI/CD pipelines that span different cloud platforms, or on-premise and cloud platforms.
Job Location:
Pune.
Job Description:
- Hands on with AWS (Amazon Web Services) Cloud with DevOps services and CloudFormation.
- Experience interacting with customer.
- Excellent communication.
- Hands-on in creating and managing Jenkins job, Groovy scripting.
- Experience in setting up Cloud Agnostic and Cloud Native CI/CD Pipelines.
- Experience in Maven.
- Experience in scripting languages like Bash, Powershell, Python.
- Experience in automation tools like Terraform, Ansible, Chef, Puppet.
- Excellent troubleshooting skills.
- Experience in Docker and Kubernetes, including creating Dockerfiles.
- Hands on with version control systems like GitHub, Gitlab, TFS, BitBucket, etc.
Mandatory Skills Sets
- Excellent problem-solving skills in technical challenges
- Deep knowledge of at least one cloud platform (AWS Preferred)
- Understanding of the latest cloud computing technologies
- Experience in architecting solutions based on knowledge of infrastructure & application architectures, including integration approaches
- Completely hands-on, with the ability to grasp evolving technologies and coding languages
- Excellent communication skills, as this is a customer-facing role
- Design thinking
- Customer-facing skills and strong technical capabilities to review the team's work as well as guide the team
- Experience working/building/contributing to proposals for architecture, estimations
Preferred Skills Sets
- Experience architecting infrastructure solutions using both Linux/Unix and Windows with specific recommendations on server, load balancing, HA/DR, & storage architectures.
- Experience architecting or deploying Cloud/Virtualization solutions in enterprise customers.
- Must have performed an Application Architect role for 3+ years
- AWS platform specific experience a bonus.
- Enterprise application and database architecture a bonus.