11+ Biometrics Jobs in Pune | Biometrics Job openings in Pune
Apply to 11+ Biometrics Jobs in Pune on CutShort.io. Explore the latest Biometrics Job opportunities across top companies like Google, Amazon & Adobe.
IT Person – Job Requirements (Restaurant Industry)
1. Technical Skills
POS system setup, troubleshooting, and daily support
Knowledge of billing printers, KOT printers, LAN/WiFi setup
Basic networking (router, switch, IP configuration)
CCTV installation & monitoring
System formatting, software installation, antivirus
Knowledge of cloud backup, email setup (Outlook/Gmail)
Handling biometric attendance machines
Experience with ERP/FnB software (if any)
Basic hardware repairing (CPU, cables, RAM, HDD issues)
2. Restaurant-Specific Skills
Understanding of POS menus, modifiers, kitchen display systems
Printer paper roll change, printer alignment, POS connectivity
Troubleshooting online order integrations (Zomato/Swiggy/Dunzo)
Wi-Fi network stability for restaurant operations
Support during audit days, billing issues, shift closing reports
3. Soft Skills
Quick problem-solving
Good communication
Ability to visit outlets when needed
Willingness to work on weekends when urgent issues arise
Responsible and reliable
4. Experience
1–3 years of experience in IT support
Restaurant or retail industry experience preferred
5. Education
Diploma or Degree in IT/Computer Science
Or relevant certification (CCNA, hardware networking, etc.)
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 or p4-fusion to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
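One of the responsibilities above is finding files that exceed GitHub's 100 MB per-file limit so they can be moved to Git LFS before the migrated history is pushed. The helper below is a minimal sketch of that scan, assuming a locally checked-out Perforce workspace; the function name and default limit are illustrative, not part of any specific toolchain.

```python
import os

# GitHub rejects individual files larger than 100 MB; such files need to be
# registered with Git LFS before the migrated history is pushed.
SIZE_LIMIT = 100 * 1024 * 1024

def find_lfs_candidates(root, limit=SIZE_LIMIT):
    """Walk a checked-out workspace and list files larger than `limit` bytes."""
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > limit:
                candidates.append(path)
    return candidates
```

The paths this returns would then be handed to `git lfs track` before committing, so the oversized content is stored as LFS pointers rather than ordinary blobs.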
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 and p4-fusion: installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
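The scope-definition and branch-mapping work described above usually starts as a simple translation table from Perforce stream paths to target GitHub repositories and branches. The sketch below illustrates that idea; every depot path, repository name, and branch name in it is a hypothetical example, not taken from a real migration.

```python
# Hypothetical mapping from Perforce stream paths to (GitHub repo, branch)
# targets, used when planning which history lands where after migration.
BRANCH_MAP = {
    "//depot/project/main":        ("project", "main"),
    "//depot/project/dev":         ("project", "develop"),
    "//depot/project/release-1.x": ("project", "release/1.x"),
}

def target_for(depot_path):
    """Resolve a Perforce stream path to its (GitHub repo, branch) target."""
    try:
        return BRANCH_MAP[depot_path]
    except KeyError:
        raise ValueError(f"No migration target defined for {depot_path}")
```

Keeping the mapping in one reviewable table makes it easy for development teams to sign off on the migration scope before any history is converted.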
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators, but securing it is incredibly difficult. It’s the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter, a fast learner, with a desire to work in a startup environment
- Experience working with Public Clouds like AWS
- Operating and Monitoring cloud infrastructure on AWS.
- Primary focus on building, implementing and managing operational support
- Design, Develop and Troubleshoot Automation scripts (Configuration/Infrastructure as code or others) for Managing Infrastructure.
- Expert in at least one scripting language (Python, Shell, etc.)
- Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc
- Handling load monitoring, capacity planning, and services monitoring.
- Proven experience with CI/CD pipelines and handling database-upgrade issues.
- Good understanding of and experience working with containerized environments like Kubernetes and datastores like Cassandra, Elasticsearch, MongoDB, etc.
- Hands-on knowledge on various CI-CD tools (Jenkins/TeamCity, Artifactory, UCD, Bitbucket/Github, SonarQube) including setting up of build-deployment automated pipelines.
- Very good knowledge of scripting tools and languages such as Shell, Perl, or Python; YAML/Groovy; and build tools such as Maven/Gradle.
- Hands-on knowledge in containerization and orchestration tools such as Docker, OpenShift and Kubernetes.
- Good knowledge in configuration management tools such as Ansible, Puppet/Chef and have worked on setting up of monitoring tools (Splunk/Geneos/New Relic/Elk).
- Expertise in job schedulers/workload automation tools such as Control-M or AutoSys is good to have.
- Hands-on knowledge on Cloud technology (preferably GCP) including various computing services and infrastructure setup using Terraform.
- Should have basic understanding on networking, certificate management, Identity and Access Management and Information security/encryption concepts.
- Should support day-to-day tasks related to platform and environment upkeep, such as upgrades, patching, migration, and system/interface integration.
- Should have experience working in an Agile-based SDLC delivery model and be able to multi-task and support multiple systems/apps.
- Big-data and Hadoop ecosystem knowledge is good to have but not mandatory.
- Should have worked on standard release, change and incident management tools such as ServiceNow/Remedy or similar
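Among the requirements above are load monitoring and capacity planning. A common first step is to aggregate recent utilisation samples and flag hosts that have run out of headroom; the sketch below shows that idea in Python. The threshold and host names are illustrative assumptions, not production values.

```python
# Capacity-planning sketch: given recent CPU utilisation samples per host,
# flag the hosts whose average utilisation leaves too little headroom.
def hosts_needing_capacity(samples, threshold=80.0):
    """Return hosts whose average CPU utilisation exceeds `threshold` percent.

    `samples` maps host name -> list of utilisation percentages.
    """
    flagged = {}
    for host, values in samples.items():
        avg = sum(values) / len(values)
        if avg > threshold:
            flagged[host] = round(avg, 1)
    return flagged
```

In practice the samples would come from a monitoring stack such as Prometheus, and the flagged hosts would feed into scaling or procurement decisions.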
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company, supporting global enterprise customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive’s global world class technology teams, working with some of the best and brightest engineers while also developing your skills and furthering your career working with some of the largest customers.
Job Description :
- Extensive experience with K8s (EKS/GKE) and K8s ecosystem tooling, e.g., Prometheus, ArgoCD, Grafana, Istio, etc.
- Extensive AWS/GCP Core Infrastructure skills
- Infrastructure/ IAC Automation, Integration - Terraform
- Kubernetes resources engineering and management
- Experience with DevOps tools, CICD pipelines and release management
- Good at creating documentation (runbooks, design documents, implementation plans)
Linux Experience :
- Namespace
- Virtualization
- Containers
Networking Experience
- Virtual networking
- Overlay networks
- VXLAN, GRE
Kubernetes Experience :
Should have experience bringing up a Kubernetes cluster manually, without using the kubeadm tool.
Observability
Experience in observability is a plus
Cloud automation :
Familiarity with cloud platforms (specifically AWS) and DevOps tools like Jenkins, Terraform, etc.
Experience : 8 to 9 Years.
Location : Pune / Nasik or willing to relocate to these locations.
Notice Period : Immediate or Within a week.
Who we are :
- Winjit is a global, agile, and innovative organization established with an aim to create value and impact for its employees, customers and all stakeholders.
- We are a group of technology enthusiasts with a mission to make a positive impact on businesses by leveraging our expertise in Artificial Intelligence, Machine Learning, IoT and Blockchain.
Who you are :
- Growth oriented, a tech geek with a curiosity to know and explore.
- A Solution oriented individual.
- A Strong Decision maker.
- Detail oriented, obsessed with pixel perfection, and able to convert a designer's vision into a working application.
- Someone who aspires to travel globally to solve exciting business challenges.
What you possess :
- Relevant experience in architecting, designing, and implementing Microsoft Azure private, public and hybrid cloud-based solutions.
- Broad understanding of cloud computing technologies, business drivers, and emerging computing trends.
- Knowledge of and experience in Microsoft Azure.
- Knowledge or good understanding of Cloud Services & Concepts (SaaS, PaaS, IaaS).
- Recent experience in working with CI/CD processes and tooling from code repositories such as GitLab/GitHub, continuous integration.
- Strong scripting abilities (PowerShell) and ability to use APIs to automate tooling, delivery pipelines and operational support.
- Keen interest in future technologies and the strategic direction of software and systems delivery to continually improve the service offered by SaaS plaza.
- Experience in application assessment to determine the best fit for cloud services.
- Good communication, presenting, collaboration and stakeholder management skills.
What you will do :
- Own & lead the technical customer engagement, assessments, architectural design sessions, improvement sessions on existing architectures, implementation projects, proofs of concepts, etc.
- Work in project teams to architect, develop and deploy new cloud solutions for customers.
- Optimize the deployment of cloud solutions using DevOps tooling.
- Embed and automate operational tasks on cloud solutions.
- Provide 2nd & 3rd line support on delivered cloud solutions.
- Build and support DevOps CI/CD automation to implement cloud infrastructure that enables engineers to self-serve for most operational tasks and achieve rapid change.
- Develop strategies and best practices for efficient development.
- Mentor the team and lead them towards growth.
With hands-on experience in:
- Microsoft Azure (a must).
- CI/CD pipeline creation.
- PowerShell scripting.
- ARM templates.
- Terraform.
Why Winjit :
- Winjit understands that our team has a life outside work as well.
We offer :
- Flexible working culture.
- Holistic development of its team members through its wellness and growth-oriented programs.
- Opportunities to upskill and cross-skill through advanced technical certifications.
- Global on-site exposure.
- Be a musician, stand-up comic, writer, or athlete while working at Winjit, as we value your passion!
One of our US-based clients is looking for a DevOps professional who can handle technical work as well as training for them in the US.
If you are hired, you will be sent to the US to work from there. The training-to-technical-work ratio will be 70% to 30%, respectively.
The company will sponsor a US visa.
If you are an experienced DevOps professional who has also delivered professional training, feel free to connect with us to learn more.
Implement integrations requested by customers
Deploy updates and fixes
Provide Level 2 technical support
Build tools to reduce occurrences of errors and improve customer experience
Develop software to integrate with internal back-end systems
Perform root cause analysis for production errors
Investigate and resolve technical issues
Develop scripts to automate visualization
Design procedures for system troubleshooting and maintenance
Hands-on experience with multiple clouds (AWS/Azure/GCP)
Good experience with Docker implementation at scale.
Kubernetes implementation and orchestration.
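The responsibilities above include performing root cause analysis for production errors and building tools to reduce their recurrence. A typical starting point is a triage helper that groups error log lines by exception type so recurring failures stand out; the sketch below assumes a simple illustrative log format, not any specific product's.

```python
import re
from collections import Counter

# Error-triage sketch for root-cause analysis: count ERROR log lines by
# exception kind. The "date time ERROR Kind: message" format is an assumed
# example format.
LOG_PATTERN = re.compile(r"^\S+ \S+ ERROR (?P<kind>[A-Za-z]+):")

def top_errors(log_lines, n=3):
    """Return the n most common error kinds found in `log_lines`."""
    counts = Counter()
    for line in log_lines:
        match = LOG_PATTERN.match(line)
        if match:
            counts[match.group("kind")] += 1
    return counts.most_common(n)
```

Ranking errors this way turns a noisy production log into a short list of candidates for deeper root-cause investigation.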
We are seeking a passionate DevOps Engineer to help create the next big thing in data analysis and search solutions.
You will join our Cloud infrastructure team supporting our developers. As a DevOps Engineer, you’ll be automating our environment setup and developing infrastructure as code to create a scalable, observable, fault-tolerant and secure environment. You’ll incorporate open source tools, automation, and Cloud Native solutions and will empower our developers with this knowledge.
We will pair you up with world-class talent in cloud and software engineering and provide a position and environment for continuous learning.
We are looking for people with programming skills in Python, SQL, and cloud computing. Candidates should have experience in at least one of the major cloud-computing platforms (AWS/Azure/GCP), professional experience handling applications and databases in the cloud using VMs and Docker images, and the ability to design and develop applications for the cloud.
You will be responsible for
- Leading the DevOps strategy and development of SAAS Product Deployments
- Leading and mentoring other computer programmers.
- Evaluating student work and providing guidance in the online courses in programming and cloud computing.
Desired experience/skills
Qualifications: Graduate degree in Computer Science or related field, or equivalent experience.
Skills:
- Strong programming skills in Python and SQL
- Cloud computing
Experience:
2+ years of programming experience including Python, SQL, and Cloud Computing. Familiarity with command line working environment.
Note: A strong programming background in any language and cloud-computing platform is required. We are flexible about the degree of familiarity needed for the specific environments (Python, SQL). If you have extensive experience in one cloud-computing platform and less in others, you should still consider applying.
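The Python-plus-SQL combination this role calls for can be as simple as loading rows into a database and querying them. The self-contained sketch below uses Python's built-in sqlite3 module with an in-memory database; the table and column names are illustrative.

```python
import sqlite3

# Minimal Python + SQL example: insert (name, score) rows into an
# in-memory SQLite table and compute the average score with SQL.
def average_score(rows):
    """Insert rows into SQLite and return the average of the score column."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE results (name TEXT, score REAL)")
    conn.executemany("INSERT INTO results VALUES (?, ?)", rows)
    (avg,) = conn.execute("SELECT AVG(score) FROM results").fetchone()
    conn.close()
    return avg
```

The same pattern (parameterised inserts, aggregation in SQL rather than in Python) carries over directly to cloud-hosted databases.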
Soft Skills:
- Good interpersonal, written, and verbal communication skills; including the ability to explain the concepts to others.
- A strong understanding of algorithms and data structures, and their performance characteristics.
- Awareness of and sensitivity to the educational goals of a multicultural population would also be desirable.
- Detail oriented and well organized.
JD:
- 10+ years of overall industry experience
- 5+ years of cloud experience
- 2+ years of architect experience
- Varied background preferred, spanning systems and development
  - Experience working with applications, not pure infrastructure experience
- Azure experience: strong background using Azure for application migrations
- Terraform experience: should mention automation technologies in job experience
- Hands-on experience delivering in the cloud
- Must have job experience designing solutions for customers
- IaaS cloud architect: workload migrations to AWS and/or Azure
- Security architecture considerations experience
- CI/CD experience
- Proven application-migration track record.



