11+ IT Administration Jobs in Bangalore (Bengaluru)
🖥️ Job Title: IT Administrator
📍 Location: Bangalore
🕒 Experience Required: 3–5 Years
💼 Employment Type: Full-time
🏥 About the Company
Connect and Heal is a leading digital healthcare platform delivering comprehensive health solutions through technology. We are driven by innovation, efficiency, and patient-centric care.
🔎 Role Overview
We are looking for a proactive and technically sound IT Administrator to manage and maintain the organization’s IT infrastructure. The ideal candidate will ensure smooth day-to-day IT operations, system security, and high availability of technology resources.
✅ Key Responsibilities
- Manage and maintain IT infrastructure including servers, desktops, laptops, printers, and network devices
- Install, configure, and troubleshoot hardware, software, operating systems, and applications
- Monitor system performance and ensure uptime and reliability
- Handle user access management, email setup, and security permissions
- Manage LAN, WAN, Wi-Fi, VPN, and firewall configurations
- Ensure data security, backup, and disaster recovery processes
- Coordinate with vendors for hardware procurement, AMC, and support services
- Provide L1/L2 technical support to employees
- Maintain IT asset inventory and documentation
- Ensure compliance with IT policies and cybersecurity standards
- Support onboarding and offboarding with system access and asset allocation
🎯 Required Skills & Qualifications
- Bachelor’s degree in Computer Science, IT, or related field
- 3–5 years of proven experience as an IT Administrator or System Administrator
- Strong knowledge of Windows OS, Linux basics, and networking concepts
- Hands-on experience with Active Directory, Office 365, and endpoint security
- Knowledge of firewalls, antivirus, data backup, and recovery systems
- Experience with cloud platforms (AWS/Azure – basic level preferred)
- Strong troubleshooting and problem-solving skills
- Excellent communication and coordination skills
⭐ Preferred Skills
- ITIL certification or similar
- Experience in healthcare IT environment (added advantage)
- Exposure to cyber security tools and data privacy practices
💼 Why Join Connect and Heal?
- Work with a fast-growing health-tech organization
- Opportunity to handle enterprise-level IT infrastructure
- Collaborative and innovative work environment
- Career growth and continuous learning opportunities
Requirements:
- Proficiency in Java fundamentals and coding.
- Proficiency in front-end technologies such as Angular and React.
- Experience with backend frameworks like Spring Boot (mandatory).
- Previous experience as a Java Full Stack trainer.
- Excellent communication skills.
- Ability to work independently and remotely (for candidates outside Kochi, Kerala).
- Willingness to work onsite in our office (for Kochi residents).
Preferred Qualifications:
- Certification in Java.
- Experience with Agile methodologies.
- Big data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components across ingestion, data modeling, querying, processing, storage, analysis, and data integration, and in implementing enterprise-level big data systems.
- A skilled developer with strong problem-solving, debugging, and analytical capabilities, who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components such as Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala, and Oozie.
- Hands-on experience creating real-time data streaming solutions using Apache Spark Core, Spark SQL and DataFrames, Kafka, Spark Streaming, and Apache Storm.
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters: NameNode, DataNode, ResourceManager, NodeManager, and JobHistory Server.
- Worked on both Cloudera and Hortonworks Hadoop distributions; experience managing Hadoop clusters using Cloudera Manager.
- Well versed in installing, configuring, and managing big data workloads and the underlying Hadoop cluster infrastructure.
- Hands-on experience coding MapReduce/YARN programs in Java, Scala, and Python for big data analysis.
- Exposure to Cloudera development environments and management using Cloudera Manager.
- Extensively worked with Spark (Scala) on clusters for analytics: installed Spark on top of Hadoop and built advanced analytical applications combining Spark with Hive and SQL/Oracle.
- Implemented Spark in Python using DataFrames and the Spark SQL API for faster data processing; imported data from various sources into HDFS using Sqoop and transformed it with Hive and MapReduce (see the PySpark sketch after this list).
- Used the Spark DataFrames API on the Cloudera platform to perform analytics on Hive data.
- Hands-on experience with Spark MLlib for predictive intelligence and customer segmentation, including use within Spark Streaming applications.
- Experience using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience optimizing MapReduce jobs to use HDFS efficiently via various compression mechanisms.
- Built data pipelines for event ingestion and aggregation, loading consumer-response data into Hive external tables in HDFS to serve as feeds for Tableau dashboards.
- Hands-on experience using Sqoop to move data between RDBMS sources and HDFS in both directions.
- In-depth understanding of Oozie for scheduling Hive/Sqoop/HBase jobs.
- Hands-on expertise in real-time analytics with Apache Spark.
- Experience converting Hive/SQL queries into RDD transformations using Apache Spark, Scala, and Python.
- Extensive experience with ETL tools such as SSIS and Informatica, and reporting tools such as SQL Server Reporting Services (SSRS).
- Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 and S3, including automating cluster provisioning and scaling in the AWS cloud.
- Extensively worked with Spark (Python) on clusters for analytics: installed Spark on top of Hadoop and built advanced analytical applications combining Spark with Hive and SQL.
- Strong experience and knowledge of real-time data analytics using Spark Streaming, Kafka, and Flume.
- Knowledge of installing, configuring, supporting, and managing Hadoop clusters using Apache and Cloudera (CDH3, CDH4) distributions and on Amazon Web Services (AWS).
- Experienced in writing ad hoc queries with Cloudera Impala, including Impala analytical functions.
- Experience creating DataFrames with PySpark and performing operations on them in Python.
- In-depth understanding of Hadoop architecture and its components, including HDFS, the MapReduce programming paradigm, high availability, and the YARN architecture.
- Established connections to multiple Redshift clusters (Bank Prod, Card Prod, SBBDA) and provisioned access for pulling the information needed for analysis.
- Generated various knowledge reports in Power BI based on business specifications.
- Developed interactive Tableau dashboards that give a clear view of industry-specific KPIs, using quick filters and parameters to handle them efficiently.
- Experienced in projects using JIRA, testing, and the Maven and Jenkins build tools.
- Experienced in designing, building, deploying, and operating much of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance, and auto-scaling.
- Good experience with use-case development and software methodologies such as Agile and Waterfall.
- Working knowledge of Amazon Elastic Compute Cloud (EC2) for computational tasks and Simple Storage Service (S3) as a storage mechanism.
- Good working experience importing data into HDFS using Sqoop and SFTP from sources such as RDBMS, Teradata, mainframes, Oracle, and Netezza, and transforming it with Hive, Pig, and Spark.
- Extensive experience in text analytics, developing statistical machine learning solutions to various business problems, and generating data visualizations using Python and R.
- Proficient in NoSQL databases including HBase, Cassandra, and MongoDB, and their integration with Hadoop clusters.
- Hands-on experience with Hadoop big data technologies: MapReduce, Pig, and Hive as analysis tools, and Sqoop and Flume as data import/export tools.
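To ground the Spark-with-Hive bullets above, here is a minimal PySpark sketch of that pattern: read a Hive table into a DataFrame, aggregate it, and write the result to HDFS as a feed for an external table. The application name, the consumer_response table, its columns, and the output path are hypothetical placeholders, not details from this listing.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hive-enabled Spark session; assumes the cluster's Hive metastore is configured.
    spark = (SparkSession.builder
             .appName("consumer-response-feed")
             .enableHiveSupport()
             .getOrCreate())

    # Read a (hypothetical) Hive table into a DataFrame.
    events = spark.table("consumer_response")

    # Aggregate with the DataFrame API; the same logic could be written in Spark SQL.
    daily = (events
             .groupBy("event_date", "channel")
             .agg(F.count("*").alias("event_count"),
                  F.countDistinct("customer_id").alias("unique_customers")))

    # Write to an HDFS path that a Hive external table (and a Tableau feed) can point at.
    daily.write.mode("overwrite").parquet("hdfs:///warehouse/feeds/consumer_response_daily")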
MODE OF INTERVIEW - FACE TO FACE
MODE - WORK FROM OFFICE
LOCATION - BANGALORE
JOB DESCRIPTION -
- 5+ years of experience working in the IT industry
- 4+ years of experience developing custom applications on the Salesforce development environment including Force.com IDE, Migration Tools, SOSL, SOQL and Web Services
- Solid understanding of Salesforce, including configuration, workflows, triggers, managing custom objects, fields, formulas, the Force.com IDE, Migration Tools, SOSL, SOQL, Web Services, Lightning enablement, the Aura framework, and Lightning Web Components (see the SOQL sketch at the end of this posting)
- Ability to work in a fast-paced setting and prioritize among competing tasks and assignments
- Must be a self-starter, with the ability to work independently without constant supervision and micromanagement
- Must be able to work flexible hours to collaborate with remote IT and business team members
- Excellent analytical, organizational and problem-solving skills
- Excellent written and oral communication skills
- Strong customer service mentality
Job Type: Full-time
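For context on the SOQL requirement above, here is a minimal sketch of querying Salesforce from Python using the simple-salesforce library. The credentials and the query are placeholders; this illustrates the SOQL/Web Services skills named in the requirements, not this employer's actual integration.

    from simple_salesforce import Salesforce

    # Placeholder credentials; a real org would use a secrets store, not literals.
    sf = Salesforce(username="user@example.com",
                    password="password",
                    security_token="token")

    # SOQL: fetch recently created Accounts. SOQL queries objects and fields,
    # unlike SOSL, which performs text searches across objects.
    result = sf.query("SELECT Id, Name FROM Account WHERE CreatedDate = THIS_MONTH")

    for record in result["records"]:
        print(record["Id"], record["Name"])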
About Plum
We are making health insurance simple, accessible, and affordable. Hundreds of businesses of all sizes, from startups to large corporates, trust Plum for their employees' health protection.
Healthcare in India is seeing a phenomenal shift. Healthcare costs are inflating at three times the general rate, and treatment of diseases, including Covid-19, can wipe out entire household savings. A majority of Indians cannot afford health insurance on their own, and as many as 600 million Indians depend on employer-sponsored insurance.
Enter Plum. Plum is re-imagining the health insurance stack and accelerating the penetration of health insurance in India to 100%. Plum has forged new underwriting and fraud-detection algorithms that let companies as small as two employees benefit from group insurance. The platform enables real-time insurance design and pricing so companies can buy insurance in three clicks, and offers employees a hassle-free claims experience through an integrated digital process.
Plum is backed by leading global investors including Tiger Global, Sequoia Capital, Tanglin Ventures and Incubate Fund (read more at https://www.plumhq.com/blog/plum-series-a-funding).
Role and Responsibilities
- Design, build, and maintain high performance, reusable, and reliable Java code.
- Translate designs and wireframes into high quality components and modular code, ensuring the best possible performance, quality, and responsiveness of the application
- Strong advocate of Google’s Android design principles and interface guidelines
- Strong sense of ownership and integrity demonstrated through clear communication and collaboration. Experience building and maintaining an open-source project or design library is a plus.
Qualifications
- 3+ years of experience with the Android SDK, different versions of Android, and handling different screen sizes.
- Experience with local databases, offline storage, threading, and performance tuning.
- Experience with Kotlin and Jetpack Compose is desirable.
- Familiarity with RESTful APIs to connect Android applications to back-end services
- Familiarity with the use of additional sensors, such as camera, gyroscopes and accelerometers
- Knowledge of the open-source Android ecosystem and the libraries available for common tasks
- Ability to understand business requirements and translate them into technical requirements
1. Implement high-quality cloud architectures that meet customer requirements and are consistent with enterprise architectural standards.
2. Deep understanding of cloud computing technologies, business drivers, and emerging computing trends.
3. The ideal resource will have experience across enterprise-grade hybrid cloud or data centre transformation.
4. Install, configure, and upgrade MySQL/Postgres cluster database software.
5. Experience in setting up DR for RDBMS databases in Linux environments.
6. Create, configure, manage, and migrate NoSQL databases (Redis, Cassandra, and MongoDB).
7. Manage day-to-day operations from development to production databases.
8. Monitor the health of cloud services and databases (see the sketch after this list).
9. Good understanding of NoSQL/relational databases.
10. Troubleshoot general and performance issues in NoSQL/RDBMS databases.
11. Experience in Linux OS and scripting.
12. Hands-on experience with Azure, GCP, and AWS clouds and their services.
13. Knowledge of Python/Ansible is an added advantage.
14. Leverage open-source technologies and cloud-native hosting platforms.
15. Design and recommend suitable, secure, performance-optimised database offerings based on business requirements.
16. Ensure security considerations are at the forefront when designing and managing database solutions.
17. Plan maintenance work meticulously to minimise or eliminate self-inflicted P1 outages.
18. Ability to provide technical system solutions, determine overall design direction, and provide hardware recommendations for complex technical issues.
19. Provision, deploy, and monitor the cloud environment using automation tools like Terraform.
20. Ensure all key databases have deep-insight monitoring enabled to improve fault-detection capabilities.
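As one illustration of the monitoring duty in item 8, here is a minimal Python health-check sketch for a Postgres cluster and a Redis instance using the psycopg2 and redis-py clients. The hostnames, DSN, and credentials are hypothetical placeholders; a real setup would feed the result into a scheduler or alerting system.

    import sys
    import psycopg2   # PostgreSQL driver
    import redis      # redis-py client

    # Hypothetical connection details; replace with your environment's endpoints.
    PG_DSN = "host=pg.example.internal dbname=appdb user=monitor password=secret"
    REDIS_HOST = "redis.example.internal"

    def check_postgres(dsn: str) -> bool:
        """Return True if Postgres accepts a connection and answers a trivial query."""
        try:
            with psycopg2.connect(dsn, connect_timeout=5) as conn:
                with conn.cursor() as cur:
                    cur.execute("SELECT 1")
                    return cur.fetchone() == (1,)
        except psycopg2.Error:
            return False

    def check_redis(host: str) -> bool:
        """Return True if Redis responds to PING."""
        try:
            return bool(redis.Redis(host=host, socket_timeout=5).ping())
        except redis.RedisError:
            return False

    if __name__ == "__main__":
        ok = check_postgres(PG_DSN) and check_redis(REDIS_HOST)
        sys.exit(0 if ok else 1)   # non-zero exit lets a scheduler raise an alert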
Required Qualifications:
• Minimum 6-8 years of experience as a Database Administrator, preferably in a cloud environment.
• Application migration experience involving migrating large-scale infrastructure between clouds.
• Experience executing migration cutover activities; support UAT and troubleshoot issues during and after migration.
• Ability to work independently on multiple tasks with minimal guidance; ability and desire to work in a fast-paced environment.
• Contribute to overall capacity planning and configuration management of the supporting infrastructure.
• Review recommendations around security, availability, and performance from the cloud platform.
• Ability to remain flexible in a demanding work environment and adapt to rapidly changing priorities.
• As a leader, you will facilitate discussions and lead decision-making on all engineering aspects of your team.
• Define and execute the engineering plans for the areas under your ownership.
• Drive engineering best practices for the team.
• Define, implement, and maintain the hygiene of the production systems (both engineering and processes) for the areas under your ownership.
• Take responsibility for the health of the business directly owned by the team.
• Challenge business and product on outcomes, channel feedback into execution, and be accountable for engineering outputs.
• Hire, mentor, and retain a best-in-class engineering team.
• Own all stakeholder management, including but not limited to business, product, operations, and clients/vendors.
ABOUT VOLKS CONSULTING:
Volks Consulting designs, builds, and delivers market-leading HRMS solutions for its customers around the world. We help drive the revolution with unmatched insight and unrivalled technology, connecting front-end revenue and relationships with back-end execution and efficiency, optimized on a common technology platform. This platform-based approach is enabling leading companies across the globe to get closer to their customers and achieve real-world results.
PRIMARY ROLE:
We are looking for a Technical Recruiter to join our Product Hiring team for our clients. Technical Recruiter responsibilities include sourcing, screening, and providing a shortlist of qualified candidates for various technical roles for our clients. You will also network online and offline with potential candidates to promote our clients' employer brands, reduce time-to-hire, and attract the best professionals.
WHAT YOU'LL DO:
• Understand our clients and become a domain expert for them.
• Understand the client's requirements and source potential candidates from niche platforms like Stack Overflow, GitHub, LinkedIn, etc.
• Parse specialized skills and qualifications to screen resumes.
• Perform pre-screening calls to analyse applicants' abilities, including a project-based technical screening.
• Interview candidates using a combination of methods (e.g., structured interviews, technical assessment, and behavioural assessment).
• Craft and send personalized recruiting emails to candidates.
• Facilitate and finalize candidate offers and follow up in a timely manner to ensure they join.
SKILL SETS:
• Proven work experience as a Technical Recruiter (1-3 years).
• Sound knowledge of the SDLC and various technologies.
• Passion for learning client domains, technical skills, and extensive screening methodologies.
• Excellent resume-sourcing skills across platforms like job boards, LinkedIn, Stack Overflow, GitHub, etc.
• Excellent verbal and written communication skills, including the ability to write quality professional emails.
• Ability to adapt in a highly productive and fast-paced environment.
• Excellent people and coordination skills.
● Working hand in hand with application developers and data scientists to help build software that scales in terms of performance and stability
Skills
● 3+ years of experience managing large-scale data infrastructure and building data pipelines/data products.
● Proficient in any data engineering technologies; proficiency in AWS data engineering technologies is a plus.
● Language: Python, Scala, or Go.
● Experience working with real-time streaming systems and handling millions of events per day (see the sketch after this list).
● Experience developing and deploying data models on the cloud.
● Bachelor's/Master's in Computer Science or equivalent experience; ability to learn and use skills in new technologies.
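To illustrate the real-time streaming requirement above, here is a minimal Python sketch of a Kafka consumer using the kafka-python library. The topic name, broker address, and event schema are hypothetical placeholders, and the in-memory counting is a toy stand-in for a real pipeline.

    import json
    from kafka import KafkaConsumer  # kafka-python client

    # Hypothetical topic, broker, and consumer group; adjust for the actual cluster.
    consumer = KafkaConsumer(
        "user-events",
        bootstrap_servers=["broker1:9092"],
        group_id="event-counter",
        auto_offset_reset="earliest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    # Toy in-memory count per event type. At millions of events per day you
    # would batch, partition, and checkpoint instead of counting in memory.
    counts = {}
    for message in consumer:
        event_type = message.value.get("type", "unknown")
        counts[event_type] = counts.get(event_type, 0) + 1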






