Microsoft Windows Server Administrator - Installation & Configuration
Key Responsibilities: Proficient with Windows Server 2008/2012/2016/2019
- Installation, upgrade, configuration, licensing, and troubleshooting
- Good understanding of daily tasks based around Microsoft technologies
- Experience in Windows patching activities
- Handle server patching through SCCM, including patch deployment, remediation, and package creation
- Experience in Group Policy creation and deployment
- Secure, administer, and resolve customer technical issues, which can include OS-level, web server, database server, application server, DNS, SMTP, user management and permissions, or other software issues
- Understanding of OS-specific web hosting and database technologies, such as MSSQL/IIS for Windows
- Understanding of SSL & DNS
- Specialized knowledge of Active Directory on Windows
- Participate in DR activities
- Maintain incident management and issue resolution workflows and SOPs.
- Good understanding of AD, DNS, DHCP, File server, VMware, DC and DR.
- Good communication skills, both written and verbal; exceptional attention to detail, documentation, and organizational skills required
- Awareness of the ITIL process and its lifecycle

Required Skills & Qualifications:
✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
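As an illustration of the scripting-for-automation skill this role asks for (not part of the posting itself), here is a minimal Python sketch of a retry-with-backoff helper, a common pattern when automating calls against cloud APIs. All names here are hypothetical, and the `flaky_call` stand-in merely simulates a transient failure:

```python
import time
from functools import wraps

def retry(attempts=3, delay=1.0, backoff=2.0):
    """Retry a flaky operation (e.g. a cloud API call) with exponential backoff."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise  # out of attempts: surface the error
                    time.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator

@retry(attempts=3, delay=0.01)
def flaky_call(state={"calls": 0}):
    # Stand-in for an operation prone to transient failure, such as a GCP API request.
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("transient error")
    return "ok"
```

The same wrapper works unchanged around Bash-invoked subprocesses or SDK calls; only the wrapped function changes.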
Top college graduate with an Ex-Founder's Office / Ex-Founder / Ex-Product Manager / Ex-Sales / Ex-Customer Success profile
Mandatory (Experience 1): Must have 6–10 years of experience in roles such as Ex-Founder’s Office / Ex-Customer Success / Ex-Sales / Ex-Program / customer-facing Ex-Product profiles
Mandatory (Experience 2): Must have strong strategic thinking and excellent communication skills
Mandatory (Experience 3): Must have direct external customer-facing experience, owning enterprise and/or mid-market customers, across post-sales, solutions, delivery, or customer success charters
Mandatory (Experience 4): Must have worked in startup or operator-style environments — demonstrating hands-on problem solving, building processes from scratch, and driving measurable outcomes across cross-functional teams
Mandatory (Experience 5): Must have managed and led a team of 4+ members, with ownership of outcomes
Mandatory (Experience 6): Must have experience scaling customer-facing functions (1→10 or 1→100 journeys)
Mandatory (Graduation): Only IIT candidates from Kharagpur, Mumbai, Madras, Kanpur, Delhi, Roorkee, Guwahati OR BITS Pilani (Pilani campus)
Mandatory (Graduation Year): Candidates should have graduated in 2018 or before
Mandatory (Compensation): Candidate's current CTC should not be less than 35 LPA
1. Flink Sr. Developer
Location: Bangalore (WFO)
Mandatory Skills & Experience (10+ years): Must have hands-on experience with Flink, Kubernetes, Docker, microservices, any one of Kafka/Pulsar, CI/CD, and Java.
Job Responsibilities:
As the Data Engineer lead, you are expected to engineer, develop, support, and deliver real-time streaming applications that model real-world network entities, and to have a good understanding of telecom network KPIs so as to improve the customer experience through automation of operational network data. Real-time application development will include building stateful in-memory backends and real-time streaming APIs, leveraging real-time databases such as Apache Druid.
- Architecting and creating the streaming data pipelines that will enrich the data and support the use cases for telecom networks.
- Collaborating closely with multiple stakeholders, gathering requirements and seeking iterative feedback on recently delivered application features.
- Participating in peer review sessions to provide teammates with code review as well as architectural and design feedback.
- Composing detailed low-level design documentation, call flows, and architecture diagrams for the solutions you build.
- Running to a crisis anytime the Operations team needs help.
- Performing duties with minimum supervision and participating in cross-functional projects as scheduled.
Skills:
- A Flink Sr. Developer who has implemented, and dealt with failure scenarios of, processing data through Flink.
- Experience with Java, K8s, Argo CD/Workflows, Prometheus, and Aether.
- Familiarity with object-oriented design patterns.
- Experience with application development DevOps tools.
- Experience with distributed cloud-native application design deployed on Kubernetes platforms.
- Experience with PostgreSQL, Druid, and Oracle databases.
- Experience with a streaming message bus platform, either Kafka or Pulsar.
- Experience with AI/ML tooling: Kubeflow, JupyterHub.
- Experience building real-time applications that leverage streaming data.
- Experience with Apache Spark applications and Hadoop platforms.
- Strong problem-solving skills.
- Strong written and oral communication skills.
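To illustrate the kind of stateful stream processing this role centres on (illustrative only, not part of the posting): a real Flink job would express keyed, tumbling-window aggregation with `keyBy()` + `window()` over an unbounded stream in Java, but the core idea can be sketched in a few lines of plain Python over a bounded event list. The event data and key names below are made up:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    """Toy keyed tumbling-window aggregation: count events per key per window.

    `events` is an iterable of (timestamp_ms, key) pairs. Each event is
    assigned to the window whose start is its timestamp rounded down to a
    multiple of `window_ms`.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_ms) * window_ms
        counts[(key, window_start)] += 1
    return dict(counts)

# Hypothetical network events keyed by cell ID.
events = [(10, "cellA"), (40, "cellA"), (60, "cellB"), (110, "cellA")]
result = tumbling_window_counts(events, window_ms=100)
# "cellA" has two events in window [0, 100) and one in [100, 200)
```

What Flink adds on top of this sketch is exactly the hard part the posting asks about: fault-tolerant state (checkpoints), event-time watermarks, and recovery from failure scenarios.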
Role: AUTOSAR
Location: Hyderabad, IN
Full-time
6 to 8+ years' experience
Skills Needed
Must Have
- Experience in AUTOSAR (OS, COM, DCM, Mem & BswM)
- Hands-on experience working with any one of the commercial AUTOSAR stacks (EB/Vector/Mentor Graphics/KPIT)
- Hands-on experience working with compiler settings, linker optimizations, etc.
- Hands on experience with any one of the design tools (EA/Rhapsody)
- Hands on experience in working with requirement management tools, configuration management tools
- Strong debugging skills
- Should be proficient in using Trace32 debugger & writing scripts
- Proficient in understanding arxml files and component integration
- Experience/knowledge of OS, multicore systems, RTE, and Automotive Ethernet protocol modules
- Experience/knowledge of UML is an added advantage
- ADAS domain knowledge
- Should be able to provide estimations for a work package and work with SW PM in preparing the project plan
- Knowledge of ASPICE SWE process areas
Job R&R
Roles:
- Provide the overall SW architecture to the team.
- Consider consistency and reusability.
- Assign ASIL levels to system elements.
Responsibilities:
- Decompose the architecture into software functional components.
- Analyze the critical path based on special requirements of certain categories (e.g., Safety, Security).
- Define software component budgets for MCU resources (runtime, RAM/ROM).
- Define the CPU loading and calibration strategy.
- Model the dynamic behavior.
- Produce state machine and timing diagrams for operational modes.
- Describe components and interfaces.
- Evaluate the design decisions and document their justifications for future maintenance.
- Document internal and external interfaces (physical, functional, and content) with reference to their specification in the required database.
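The budgeting responsibility above can be illustrated with a small sketch (not part of the posting): an architect sums per-component RAM/ROM budgets and checks them against the MCU totals. The component names and figures below are invented, not from any real ECU:

```python
def check_budgets(components, limits):
    """Sum per-component resource budgets and check them against MCU limits.

    `components` maps component name -> {"ram_kb": ..., "rom_kb": ...};
    `limits` holds the MCU totals. Returns, per resource, the total used
    and whether it fits within the limit.
    """
    totals = {res: 0 for res in limits}
    for budget in components.values():
        for res, val in budget.items():
            totals[res] += val
    return {res: (totals[res], totals[res] <= limits[res]) for res in limits}

# Hypothetical budgets for a few BSW/application components.
components = {
    "Com": {"ram_kb": 12, "rom_kb": 40},
    "Dcm": {"ram_kb": 8, "rom_kb": 64},
    "App": {"ram_kb": 96, "rom_kb": 300},
}
report = check_budgets(components, {"ram_kb": 128, "rom_kb": 512})
# RAM: 116 of 128 KB used; ROM: 404 of 512 KB used - both within budget
```

In practice these figures would come from linker map files and runtime measurements (e.g., via Trace32), not hand-entered constants.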
We are looking for a React Native developer interested in building performant mobile apps on both the iOS and Android platforms. You will be responsible for architecting and building these applications, as well as coordinating with the teams responsible for other layers of the product infrastructure. Building a product is a highly collaborative effort, and as such, a strong team player with a commitment to perfection is required.
Responsibilities
- Build pixel-perfect, buttery smooth UIs across both mobile platforms.
- Leverage native APIs for deep integrations with both platforms.
- Diagnose and fix bugs and performance bottlenecks for performance that feels native.
- Reach out to the open source community to encourage and help implement mission-critical software fixes—React Native moves fast and often breaks things.
- Maintain code and write automated tests to ensure the product is of the highest quality.
- Transition existing React web apps to React Native.
- Experience providing technical leadership in the Big Data space (Hadoop stack: Spark, M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.). Should have contributed to open-source Big Data technologies.
- Expert-level proficiency in Python
- Experience in visualizing and evangelizing next-generation infrastructure in Big Data space (Batch, Near Real-time, Real-time technologies).
- Passion for continuous learning, experimenting, applying, and contributing towards cutting-edge open-source technologies and software paradigms
- Strong understanding and experience in distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN; MR & HDFS) and associated technologies.
- Hands-on experience with Apache Spark and its components (Streaming, SQL, MLLib)
- Operating knowledge of cloud computing platforms (AWS, especially EMR, EC2, S3, SWF services, and the AWS CLI)
- Experience working within a Linux computing environment and use of command-line tools, including knowledge of shell/Python scripting for automating common tasks
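As a toy illustration of the MapReduce model named above (not part of the posting): a mapper emits key/value pairs, a shuffle groups them by key, and a reducer aggregates each group. On Hadoop or Spark this runs distributed over HDFS partitions; the same logic in miniature, with invented input lines, is:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Mapper: emit a (word, 1) pair for each word in an input line.
    return [(word, 1) for word in line.split()]

def reduce_phase(pairs):
    # Shuffle + reducer: group pairs by key, then sum the counts per key.
    grouped = defaultdict(int)
    for word, count in pairs:
        grouped[word] += count
    return dict(grouped)

lines = ["big data big pipelines", "data pipelines scale"]
counts = reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
# e.g. "big" appears twice across the two lines
```

The framework's value is everything this sketch omits: partitioning the input, running mappers and reducers in parallel across nodes, and recovering from task failures.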
Blume Global (HQ California, www.blumeglobal.com) is a disrupter in the supply chain software market and has built the next-generation, cloud-first Digital Supply Chain Platform for Fortune 500 companies. Blume Global uses its 25+ years of data insights and global network to help enterprises be more agile, improve service delivery, and reduce cost, removing significant wastage from their operations.
Role Summary:
As an experienced Analyst, you will primarily make our customers think you are magical by resolving complex problems through your technical and product expertise. As you learn more about our product suite, you will extend your depth of knowledge of the products you support and expand into new technology stacks and supply chain domain knowledge. To hone your technical prowess, you will dig deep into databases, data files, logs, and traces to find the source of any problem. Finally, you will be someone our customers trust; they will depend on you to provide timely and accurate information on their application issues.
Responsibilities:
• Prior experience of working in an Application/Production support environment.
• MySQL knowledge and SQL querying abilities are needed. Skills in Python scripting would be advantageous.
• Troubleshooting and developing new solutions that solve the root cause of customer problems in tickets escalated from our L1 support team; work independently within the team.
• Problem Management (identifying recurring incidents, notifying L3 for permanent fixes)
• Along with Customer Success Manager, participate in Weekly & Monthly reviews with the customers.
• Writing step-by-step processes, technical solutions, and ticket updates to customers using clear and concise English.
• Study ticket patterns and suggest improvements. Identify areas that can be automated.
• Experience in application support ticketing tools such as ServiceNow & Jira
• Thorough understanding of SLA Management & Operational reporting.
• Provide value to the customer in line with quality, process improvements, and other customer-centric initiatives.
• ITIL V3 Foundation certified and thorough in service management processes: Event Management, Incident Management, Problem Management & Change Management.
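The Problem Management duty above (spotting recurring incidents worth a permanent fix) can be sketched in a few lines of Python, the scripting language the posting calls advantageous. This is illustrative only; the category names and threshold are hypothetical, not fields from ServiceNow or Jira:

```python
from collections import Counter

def recurring_incidents(tickets, threshold=3):
    """Flag ticket categories recurring often enough to raise a problem record.

    `tickets` is a list of category strings (e.g. exported from a ticketing
    tool); categories seen at least `threshold` times are returned.
    """
    counts = Counter(tickets)
    return [cat for cat, n in counts.items() if n >= threshold]

# Hypothetical week of L2 tickets.
tickets = ["login-failure", "report-timeout", "login-failure",
           "login-failure", "report-timeout"]
```

In practice the same grouping is often done directly in SQL (`GROUP BY category HAVING COUNT(*) >= 3`) against the ticket database, matching the MySQL querying skill the posting lists.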
