Desktop Administration Jobs in Bangalore (Bengaluru)
Apply to 11+ desktop administration jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest desktop administration job opportunities across top companies like Google, Amazon & Adobe.

Role: System Admin
Experience: 1-2 Years
Position Type: Full Time (Office)
Location: Bangalore
Skill Requirements:
Linux management
Desktop support
Sophos firewall
NAS
Skills:
Experience with Cassandra, including installing, configuring, and monitoring a Cassandra cluster.
Experience with Cassandra data modeling and CQL scripting (a minimal CQL sketch follows this list).
Experience with DataStax Enterprise Graph.
Experience with both Windows and Linux operating systems. Knowledge of the Microsoft .NET Framework (C#, .NET Core).
Ability to perform effectively in a team-oriented environment
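To make the Cassandra bullets above concrete, here is a minimal sketch using the DataStax Python driver; the contact point, keyspace, and table are hypothetical examples, not taken from the posting.

```python
# Minimal CQL sketch using the DataStax Python driver (pip install cassandra-driver).
# The contact point, keyspace, and table below are hypothetical examples.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # connect to a local single-node cluster
session = cluster.connect()

# Keyspace sized for a single-node dev cluster.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# Cassandra tables are modeled around query patterns: here, "events by user".
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.events_by_user (
        user_id   text,
        event_ts  timestamp,
        payload   text,
        PRIMARY KEY (user_id, event_ts)
    ) WITH CLUSTERING ORDER BY (event_ts DESC)
""")

session.execute(
    "INSERT INTO demo.events_by_user (user_id, event_ts, payload) "
    "VALUES (%s, toTimestamp(now()), %s)",
    ("u42", "login"),
)

for row in session.execute(
    "SELECT event_ts, payload FROM demo.events_by_user WHERE user_id = %s", ("u42",)
):
    print(row.event_ts, row.payload)

cluster.shutdown()
```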
Company Overview
Bluesapling is a fast-growing technology and digital transformation company based in Bangalore, serving businesses across the globe through its world-class enterprise solutions. We are looking for people who are passionate about understanding customer needs, helping customers adopt the software to their advantage, and willing to work on brand building at a young company.
Qualifications
- B.Com from any prominent university
- 1+ years of experience in sales
Skills
- Working closely with the sales team in support of generating new business revenues.
- Engaging with prospect leads by interacting over calls and email.
- Scheduling demos for the sales team.
- Ability to describe solution features concisely so customers understand the solution offered.
- Relationship-building skills with clients to promote repeat business.
- Good listening skills and the ability to understand clients’ pain points.
- Ability to work in a highly dynamic team and fast-paced environment with continuous learning.
- Basic Computer skills to keep interactions documented in the relevant application.
- A good command of English and Hindi is a must. Knowledge of the Tamil and Telugu regional languages would be an added advantage.
Responsibilities
Your top responsibility is to assist the sales team in qualifying prospects and developing a pipeline to bring revenue to the company. Engage with inbound leads and generate outbound leads. You will have to do qualification calls, inside sales calls, cold calls, live demos, solution advisory, client visits and everything else required to sell. You will have to master the software’s value propositions, features and the common problems of prospects. You will need to understand prospects’ businesses and problems so as to solve them using our software.

- Responsible for designing, storing, processing, and maintaining large-scale data and related infrastructure.
- Can drive multiple projects from both operational and technical standpoints.
- Ideate and build PoVs or PoCs for new products that can help drive more business.
- Responsible for defining, designing, and implementing data engineering best practices, strategies, and solutions.
- Is an Architect who can guide the customers, team, and overall organization on tools, technologies, and best practices around data engineering.
- Lead architecture discussions, align with business needs, security, and best practices.
- Has strong conceptual understanding of Data Warehousing and ETL, Data Governance and Security, Cloud Computing, and Batch & Real Time data processing.
- Has strong execution knowledge of Data Modeling, Databases in general (SQL and NoSQL), software development lifecycle and practices, unit testing, functional programming, etc.
- Understanding of the Medallion architecture pattern.
- Has worked on at least one cloud platform.
- Has worked as a data architect and executed multiple end-to-end data engineering projects.
- Has extensive knowledge of different data architecture designs and data modelling concepts.
- Manages conversations with client stakeholders to understand requirements and translate them into technical outcomes.
Required Tech Stack
- Strong proficiency in SQL
- Experience working on any of the three major cloud platforms i.e., AWS/Azure/GCP
- Working knowledge of ETL and/or orchestration tools like IICS, Talend, Matillion, Airflow, Azure Data Factory, AWS Glue, GCP Composer, etc. (a minimal Airflow sketch follows this list)
- Working knowledge of one or more OLTP databases (Postgres, MySQL, SQL Server, etc.)
- Working knowledge of one or more data warehouses like Snowflake, Redshift, Azure Synapse, Hive, BigQuery, etc.
- Proficient in at least one programming language used in data engineering, such as Python (or Scala/Rust/Java)
- Has strong execution knowledge of Data Modeling (star schema, snowflake schema, fact vs dimension tables; see the schema sketch after this list)
- Proficient in Spark and related applications like Databricks, GCP DataProc, AWS Glue, EMR, etc.
- Has worked on Kafka and real-time streaming.
- Has strong execution knowledge of data architecture design patterns (lambda vs kappa architecture, data harmonization, customer data platforms, etc.)
- Has worked on code and SQL query optimization.
- Strong knowledge of version control systems like Git to manage source code repositories and designing CI/CD pipelines for continuous delivery.
- Has worked on data and networking security (RBAC, secret management, key vaults, VNets, subnets, certificates)
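One concrete reading of the orchestration requirement above is a minimal Apache Airflow DAG with two dependent tasks; the dag_id, schedule, and task bodies are illustrative placeholders only.

```python
# Minimal Airflow DAG sketch (pip install apache-airflow).
# The dag_id, schedule, and task logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw records from the source system")


def load():
    print("write transformed records to the warehouse")


with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,       # skip backfilling past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
```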
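And for the star-schema point: a self-contained sketch of one fact table and one dimension table, using sqlite3 from the standard library so it runs anywhere. The schema and data are invented for illustration, not taken from any posting.

```python
# Star-schema sketch: one fact table referencing one dimension table.
# Uses sqlite3 (standard library); the schema is a hypothetical example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension: descriptive attributes, one row per product.
    CREATE TABLE dim_product (
        product_key INTEGER PRIMARY KEY,
        name        TEXT,
        category    TEXT
    );

    -- Fact: measures at the grain of one sale, keyed to dimensions.
    CREATE TABLE fact_sales (
        sale_id     INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity    INTEGER,
        amount      REAL
    );
""")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
conn.execute("INSERT INTO fact_sales VALUES (1, 1, 3, 29.97)")

# Typical star-schema query: join the fact to a dimension and aggregate.
for row in conn.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.category
"""):
    print(row)
```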
Someone with 2+ years of strong industry experience in NLP. Experienced in applying different NLP techniques to problems such as text classification, text summarization, question answering, information retrieval, knowledge extraction, and conversational bot design, potentially with both traditional and Deep Learning techniques. In-depth exposure to some of the tools/techniques: SpaCy, NLTK, Gensim, CoreNLP, NLU and NLG tools, etc. Ability to design and develop a practical analytical approach, keeping in mind data quality and availability, feasibility, scalability, and turnaround time. Demonstrated capability to integrate NLP technologies to improve the chatbot experience is desirable. Exposure to frameworks like DialogFlow, RASA NLU, or LUIS is preferred. Contributions to open-source software projects are an added advantage.
Experience in analyzing large amounts of user-generated content and processing data in large-scale environments using cloud infrastructure is desirable.
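To make the text-classification requirement concrete, here is a minimal scikit-learn pipeline (TF-IDF features plus logistic regression) on a toy dataset; the texts and labels are invented, and real work would add SpaCy/NLTK preprocessing and far more data.

```python
# Minimal text-classification sketch (pip install scikit-learn).
# The toy texts and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "please reset my password",
    "cannot log in to my account",
    "what is the price of the premium plan",
    "do you offer discounts for annual billing",
]
labels = ["support", "support", "sales", "sales"]

# TF-IDF turns each text into a sparse weighted bag-of-words vector,
# and logistic regression learns a linear decision boundary over it.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["how much does the basic plan cost"]))  # -> ['sales']
```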
Key Responsibilities:
- Rewrite existing APIs in NodeJS.
- Remodel the APIs into a microservices-based architecture.
- Implement a caching layer wherever possible (see the cache-aside sketch after this list).
- Optimize the API for high performance and scalability.
- Write unit tests for API Testing.
- Automate the code testing and deployment process.
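For the caching-layer item above, here is a sketch of the cache-aside pattern. It is written in Python with redis-py only to keep one language across the sketches in this page (the posting's stack is NodeJS); the key name, TTL, and fetch_user_from_db helper are hypothetical.

```python
# Cache-aside sketch with redis-py (pip install redis). The posting's
# stack is NodeJS; Python is used here only for consistency with the
# other sketches. Key name, TTL, and fetch_user_from_db are hypothetical.
import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)


def fetch_user_from_db(user_id: int) -> dict:
    # Stand-in for a real SQL query.
    return {"id": user_id, "name": "Asha"}


def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:               # cache hit: skip the database
        return json.loads(cached)
    user = fetch_user_from_db(user_id)   # cache miss: go to the source
    r.setex(key, 300, json.dumps(user))  # store with a 5-minute TTL
    return user


print(get_user(42))
```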
Skills Required:
- At least 2 years of experience developing Backends using NodeJS — should be well versed with its asynchronous nature & event loop, and know its quirks and workarounds.
- Excellent hands-on experience using MySQL or any other SQL Database.
- Good knowledge of MongoDB or any other NoSQL Database.
- Good knowledge of Redis, its data types, and their use cases.
- Experience with graph databases like Neo4j; exposure to GraphQL APIs is a plus.
- Experience developing and deploying REST APIs.
- Good knowledge of Unit Testing and available Test Frameworks.
- Good understanding of advanced JS libraries and frameworks.
- Experience with Web sockets, Service Workers, and Web Push Notifications.
- Familiar with NodeJS profiling tools.
- Proficient understanding of code versioning tools such as Git.
- Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms.
- Should be a fast learner and a go-getter, without any fear of trying out new things.
Preferences:
- Experience building a large-scale social or location-based app.
The responsibilities of the role are per the below:
● Sales: Provide information & value proposition to prospective customers thereby converting leads into paid customers.
● CRM: Actively use & update the CRM tools to ensure data accuracy of reports & smooth transition of leads between teams.
● Feedback: Provide customer feedback to the founding team.
The role is based in Bangalore, India. You will be required to work out of the office in Indiranagar.
What do we need?
The ideal candidate is someone who –
● Has 1+ years of experience in a sales role.
● Has excellent communication & influencing skills
● Disciplined & follows processes
● Is customer-oriented and never over-promises
● Is a team player, works well in groups & optimizes for the team
● Is comfortable with ambiguity
If you meet the above criteria, please send us –
● A resume introducing yourself, your past achievements and why you are suited for this role
Role & Compensation
There are 3 roles we are hiring for. Compensation is INR 20,000 (fixed) + INR 10,000 (performance-based variable pay) per month.
Work Days & Timings
We have a 6-day working week. The team needs to cover 8 am to 9 pm, Monday to Sunday. The 3-member team will rotate to ensure that these business hours are covered while each member gets 1 day of the week off.
What’s in it for you?
1. Be part of the founding team
2. Work with the most experienced founders from the industry & learn from the best.
3. Accelerated Career Growth opportunities
4. Be part of innovation in the space of performing arts.
5. Be part of a growing organization and grow with the organization.
- Developing telemetry software to connect Junos devices to the cloud
- Fast prototyping and laying the software foundation for product solutions
- Moving prototype solutions to a production cloud multitenant SaaS solution
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources (a minimal ETL sketch follows this list)
- Build analytics tools that utilize the data pipeline to provide significant insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with partners including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics specialists to strive for greater functionality in our data systems.
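As a rough illustration of the extract-transform-load responsibility above, here is a minimal sketch using pandas with sqlite3 standing in for the warehouse; the CSV layout, cleanup rule, and target table are hypothetical.

```python
# Minimal ETL sketch (pip install pandas). The CSV layout, the cleanup
# rule, and the target table are hypothetical examples.
import io
import sqlite3

import pandas as pd

# Extract: in real pipelines this would be S3, an API, or a database.
raw_csv = io.StringIO("user_id,amount\n1,10.5\n2,\n3,7.25\n")
df = pd.read_csv(raw_csv)

# Transform: drop rows with missing amounts and add a derived column.
df = df.dropna(subset=["amount"])
df["amount_cents"] = (df["amount"] * 100).astype(int)

# Load: append into a warehouse table (sqlite3 stands in for one here).
conn = sqlite3.connect(":memory:")
df.to_sql("fact_payments", conn, index=False)
print(conn.execute("SELECT COUNT(*) FROM fact_payments").fetchone())
```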
Qualification and Desired Experiences
- Master’s in Computer Science, Electrical Engineering, Statistics, Applied Math or equivalent fields, with a strong mathematical background
- 5+ years of experience building data pipelines for data science-driven solutions
- Strong hands-on coding skills (preferably in Python) processing large-scale data sets and developing machine learning models
- Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow
- Good team player with excellent written, verbal, and presentation skills
- Create and maintain optimal data pipeline architecture.
- Assemble large, sophisticated data sets that meet functional/non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Experience with AWS, S3, Flink, Spark, Kafka, Elastic Search (see the PySpark sketch after this list)
- Previous work in a start-up environment
- 3+ years of experience building data pipelines for data science-driven solutions
- Master’s in Computer Science, Electrical Engineering, Statistics, Applied Math or equivalent fields, with a strong mathematical background
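For the Spark experience mentioned above, a minimal PySpark batch sketch; the rows and the aggregation are invented for illustration.

```python
# Minimal PySpark batch sketch (pip install pyspark). The rows and
# aggregation are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("demo").getOrCreate()

df = spark.createDataFrame(
    [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)],
    ["user_id", "amount"],
)

# Group-and-aggregate: the bread-and-butter transformation in pipelines.
totals = df.groupBy("user_id").agg(F.sum("amount").alias("total"))
totals.show()

spark.stop()
```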
- We are looking for a candidate with 9+ years of experience in a Data Engineer role who has a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience with stream-processing systems: Storm, Spark Streaming, etc. (a minimal Kafka streaming sketch follows this list)
- Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Advanced working knowledge of SQL, experience with relational databases and query authoring, and working familiarity with a variety of databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and find opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Proven understanding of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and interpersonal skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
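To ground the stream-processing bullet above, here is a minimal produce/consume sketch with kafka-python; the broker address, topic name, and payload are hypothetical, and a real deployment would point at a running Kafka cluster.

```python
# Minimal Kafka produce/consume sketch (pip install kafka-python).
# Assumes a broker at localhost:9092; topic and payload are hypothetical.
import json

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("page_views", {"user_id": "u42", "path": "/pricing"})
producer.flush()  # block until the message is actually sent

consumer = KafkaConsumer(
    "page_views",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",   # read from the beginning of the topic
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,       # stop iterating after 5s of silence
)
for message in consumer:
    print(message.value)
```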
Roles & Responsibilities:
The candidate is expected to create and lead a team of engineers and drive dev efforts for Medibuddy and its various products.
The candidate will be part of the Engineering leadership team (IIT/IIM grads) and will help strategise and execute the product roadmap.
Work closely with Product and business teams to strategize and design features and product experiments.
Lead a team of 10+ Engineers [backend and/or mobile].
Responsible for engineering delivery in platform & product engineering pods at Medibuddy.
Scale the technology architecture, team and product to drive 10x growth in the next 12-24 months.
Code and architect key features that form the backbone of DocsApp.
Conduct performance reviews and mentor and guide the team reporting to you.
Drive adoption of best engineering practices in the team and the organization.
- Deliver high-quality, scalable and maintainable code at a fast pace.
Skills & Qualifications:
- 9+ years of relevant experience in the consumer tech or SaaS space.
- Tier 1 background (IIT/BITS/IIIT/NIT); graduated in 2014 or earlier.
Strong Technical Background: You have strong programming, architecture, DevOps and QA chops. You should have experience working with a diverse engineering stack in a fast-paced environment.
Biased toward action: You must be able to do more with less and turn would-be blockers into opportunities for growth.
Balancing Short Term with Long Term: You have demonstrated strategic execution, balancing short-term tactical execution with long-term vision. You should be able to adapt quickly as the situation demands.
Efficient Execution: You must be persuasive, patient, compassionate and possess exquisite prioritization skills.
Prior management and team-building experience: You'll be managing several direct reports initially and will have the opportunity to scale and build out a high-performing engineering team.
Stakeholder management: You complement product and business owners in finding the right solutions in a timely manner.
Past entrepreneurial experience is a big plus.
Strong grasp of the Node.js/Python stack
Databases/Datastores: MySQL, Redis/Memcached, MongoDB
- Basic Understanding of Android, iOS, Web Application stacks
- One of SQS, RabbitMQ, SNS (a minimal SQS sketch follows this list)
- Familiarity with AWS and its services including RDS, ECS, EBS, Cloudwatch, ELK, Redshift
Nice to have -
Logging: ELK, CloudWatch
Frontend: HTML, CSS, JavaScript
Protocols: HTTP, XMPP, MQTT, Socket.IO, TCP
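For the queue requirement above, here is a minimal send/receive sketch against SQS using boto3; it assumes AWS credentials are already configured, and the queue name and region are hypothetical.

```python
# Minimal SQS send/receive sketch (pip install boto3). Assumes AWS
# credentials are configured; the queue name and region are hypothetical.
import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")

queue_url = sqs.create_queue(QueueName="demo-jobs")["QueueUrl"]

sqs.send_message(QueueUrl=queue_url, MessageBody="resize-image:42")

resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    WaitTimeSeconds=2,  # long polling to avoid empty responses
)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    # Delete after successful processing, or the message reappears
    # once the visibility timeout expires.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```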


