11+ Microsoft Certified Professional Jobs in Bangalore (Bengaluru) | Microsoft Certified Professional Job openings in Bangalore (Bengaluru)
1. Azure/Infra Architect
Responsibilities
- Candidate must have demonstrated experience of migrating solutions to the Azure cloud platform
- Assess and analyze the current infrastructure footprint (compute, storage, network) and complete requirements gathering (hardware, software) to move individual applications to the cloud
- Complete high-level and low-level designs for cloud infrastructure, databases and other required components.
- Build and deploy infrastructure in cloud
- Migrate OS, Database and application from on-premise to Cloud
- Apply technical knowledge and customer insights to create an application modernization roadmap and architect solutions that meet business and IT needs, ensuring technical viability of new projects and successful deployments, orchestrating key resources and applying key application development and DevOps technologies (e.g. App Service, containers, serverless, cloud native, Java/Node.js, DevOps and OSS tools)
- Connect with Client team to remove key blockers
Qualifications
- Must have at least 1 Azure certification (Administrator, Developer or Architect)
- Must have 5+ years of experience in cloud and data center migration projects as a solution architect
- Must have in-depth understanding of compute, storage, network components including backup, monitoring and DR environment requirements
- Experience and understanding of large-scale applications portfolio in enterprise-wide environment (including migration of on-premise workloads to the cloud) required;
- Deep domain expertise in cloud application development solutions (e.g. IaaS, serverless, API management), container orchestration (e.g. Kubernetes, Cloud Foundry), continuous integration technologies (e.g. Jenkins, Spinnaker, Azure DevOps, Chef, Puppet), web application server technologies, cloud application design, software architecture and practices (design/development/deployment, Agile, Scrum, ALM), breadth of technical experience, and the technical aptitude to learn and adjust to new technologies and cloud trends required;
- Experience and understanding of large-scale application development projects (including key coding skills and practices) required;
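The assessment and roadmap duties above boil down to classifying an application portfolio into migration strategies. A minimal, purely illustrative sketch of that step (the rules, attribute names, and applications below are hypothetical, not a real assessment framework):

```python
# Hypothetical sketch of a portfolio-assessment step for a cloud
# migration roadmap. Rules and app attributes are illustrative only.

def classify_app(app: dict) -> str:
    """Map an application's attributes to a coarse migration strategy."""
    if app.get("end_of_life"):
        return "retire"
    if app.get("saas_alternative"):
        return "replace"          # move to a SaaS offering
    if app.get("legacy_os") or app.get("tight_hw_coupling"):
        return "refactor"         # needs code/platform changes first
    return "rehost"               # lift-and-shift candidate

portfolio = [
    {"name": "hr-portal", "saas_alternative": True},
    {"name": "billing", "legacy_os": True},
    {"name": "intranet-wiki", "end_of_life": True},
    {"name": "order-api"},
]

roadmap = {app["name"]: classify_app(app) for app in portfolio}
print(roadmap)
```

A real assessment would of course weigh far more dimensions (data gravity, compliance, licensing), but the roadmap deliverable is essentially this mapping at scale.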
2. Azure DevOps Engineer
Responsibilities:
- Candidate will implement automation solutions on Azure using open source tools and technologies (e.g. Ansible, Jenkins, Chef) in any of the following technology tiers/products:
- Unix/Linux
- Microsoft Windows Server
- Oracle Database
- Middleware (IBM WebSphere, JBoss)
- VMware
- Candidate must have demonstrated experience of migrating solutions to the Azure cloud platform
- Candidate will provide expert level of automation solution support.
- Perform as primary Systems Administrator in a large enterprise environment.
- Perform Patch management tasks to include: maintaining current knowledge of available patches, deciding what patches are appropriate for particular systems, ensuring that patches are installed properly, testing systems after installation, and documenting all associated procedures
- Test new releases of products to ensure compatibility and minimize user impact.
- Recommend and implement system enhancements that will improve the performance and reliability of the system including installing, upgrading/patching, monitoring, problem resolution, and configuration management.
- Develop, document, and automate technical processes and procedures as needed.
- Adhere to strict Information Systems security guidelines in all cases.
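The patch-management duty above ("deciding what patches are appropriate for particular systems") can be sketched as a small filtering step. This is a hypothetical illustration only; the patch IDs and system fields are made up, and real tooling tracks far more state:

```python
# Hedged sketch of patch applicability: which available patches match a
# system's OS and are not yet installed. All IDs/fields are hypothetical.

def applicable_patches(system: dict, patches: list) -> list:
    """Return IDs of patches matching the system's OS that aren't installed."""
    return [
        p["id"] for p in patches
        if p["os"] == system["os"] and p["id"] not in system["installed"]
    ]

patches = [
    {"id": "KB500001", "os": "windows"},
    {"id": "KB500002", "os": "windows"},
    {"id": "USN-6001", "os": "linux"},
]
web01 = {"name": "web01", "os": "windows", "installed": {"KB500001"}}
print(applicable_patches(web01, patches))
```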
Qualifications:
- Must have at least 1 Azure certification (Administrator, Developer or Architect)
- Must have experience with one or more open-source tools (Ansible, Chef, Puppet, Packer, etc.).
- Hands on experience with change automation and orchestration.
- Proficient in scripting languages (e.g. Perl, Python, Bash, Ruby, etc.)
- Hands-on experience with troubleshooting and diagnosis of hardware and software problems.
- Experience installing, configuring, and maintaining computer hardware and software in a large-scale enterprise environment.
The Impact you will create on the Job
Understand and handle deploying, troubleshooting issues with Compute, Networking, Storage, Database services on AWS & Azure.
IT experience in a team handling cloud, infrastructure, and Linux and Windows operating systems.
Working on different services in the AWS & Azure cloud platforms
In-depth knowledge of a wide range of AWS & Azure services in compute, storage, networking, Infrastructure as Code, serverless computing, IAM, and CI/CD pipelines
Possess a thorough understanding of Internet based technologies (DNS, Security, IP Routing, SSH, FTP, HTTP/HTTPS etc.)
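The Internet-technology fundamentals listed above (DNS, SSH, HTTP/HTTPS, etc.) come down to knowing how service endpoints and default ports work. A small standard-library sketch, with made-up example endpoints:

```python
# Illustrative sketch: deriving scheme/host/effective-port details from
# service URLs using only the standard library. Endpoints are examples.
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443, "ftp": 21, "ssh": 22}

def endpoint(url: str) -> tuple:
    """Return (scheme, host, effective port) for a service URL."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS[parts.scheme]
    return parts.scheme, parts.hostname, port

print(endpoint("https://portal.example.com/health"))
print(endpoint("ssh://deploy.example.com:2222"))
```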
at Majoris Technologies
For one of our premium customers, we are looking to hire a team of Azure .NET Architects in Bangalore/Pune/Noida — tech geeks with 14+ years of full-time experience.
PFB the JD: Proven full-stack development skills, solid business acumen, and hands-on experience in architecting solutions
-14+ years Exp
-Expertise in designing and developing Cloud Native applications using Azure Services and managing IaC using Terraform modules and ARM templates
-Cloud Technologies: Azure PaaS services (experience in at least some of these is good enough): Logic Apps, API Management Service, Microsoft Graph API, Azure AD, Azure Functions, Cosmos DB (MongoDB API), Event Hub, Stream Analytics, Azure SQL Server, Azure Table Storage, Azure Storage Queue, Azure Blob Storage, Azure Event Grid, Azure App Service, Application Insights, Azure ARM templates, Azure Worker Roles, Azure WebJobs, Azure Relay, Azure Key Vault
- App Service, Azure Functions, VMs, ASF, AKS, Azure Container Registry, Key Vault, Cosmos DB, Azure SQL, Azure AD, Azure B2C and B2B, APIM, Azure Monitor and App insight
-Containerization tools: Docker, Azure Kubernetes Services, Azure Container Registry
-Infrastructure as Code tools: Terraform, and ARM templates
-DevOps tools: Azure DevOps, CI/CD using Git and Azure Pipeline, Azure Repos
-Microsoft Technologies: ASP.NET Core, .NET Core Web API, .NET, ASP.Net MVC, ADO.Net, TPL, LINQ, PLINQ, WCF Services, ASP .Net Web API, Angular
-Databases: NoSQL/document Databases Azure Cosmos DB, Oracle
-Strong experience in SQL
-Experience in an integration or service layer is recommended
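The IaC requirement above (Terraform modules and ARM templates) is about generating and managing declarative resource definitions. A hedged sketch of emitting a minimal ARM-template-style JSON document programmatically; the resource values are examples, and a real template should be validated against the published ARM schema:

```python
# Illustrative only: building a minimal ARM-template-shaped JSON
# document for a storage account. Values are examples; validate against
# the real ARM schema and current apiVersion before any use.
import json

def storage_account_template(name: str, location: str) -> dict:
    return {
        "$schema": "https://schema.management.azure.com/schemas/"
                   "2019-04-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": [{
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",   # assumed version, check docs
            "name": name,
            "location": location,
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }],
    }

template = storage_account_template("demostorage01", "centralindia")
print(json.dumps(template, indent=2))
```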
How You'll Contribute:
● Redefine Fintech architecture standards by building easy-to-use, highly scalable, robust, and flexible APIs
● In-depth analysis of systems/architectures to predict potential breakdowns and proactively propose solutions
● Partner with internal stakeholders to identify potential feature implementations that could cater to our growing business needs
● Drive the team towards writing high-quality code; tackle flaws in system design to attain revved-up API performance, high code reusability and readability.
● Think through the complex Fintech infrastructure and propose an easy-to-deploy modular infrastructure that could adapt and adjust to the specific requirements of the growing client base
● Design and create for scale, optimized memory usage and high throughput performance.
Skills Required:
● 5+ years of experience in the development of complex distributed systems
● Prior experience in building sustainable, reliable and secure microservice-based scalable architecture using Python Programming Language
● In-depth understanding of Python associated libraries and frameworks
● Strong involvement in managing and maintaining production-level code with high-volume API hits and low-latency APIs
● Strong knowledge of data structures, algorithms, design patterns, multithreading concepts, etc.
● Ability to design and implement technical road maps for the system and components
● Bring in new software development practices, design/architecture innovations to make our Tech stack more robust
● Hands-on experience in cloud technologies like AWS/GCP/Azure as well as relational databases like MySQL/PostgreSQL or any NoSQL database like DynamoDB
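One common pattern behind the "high volume API hits and low-latency APIs" requirement above is rate limiting. A minimal token-bucket sketch in Python (the role's own language); the rate and capacity values are illustrative:

```python
# Hedged sketch of a token-bucket rate limiter, a common building block
# for protecting high-volume APIs. Parameters are illustrative.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)
results = [bucket.allow() for _ in range(4)]
print(results)  # burst of 2 allowed, then requests are rejected
```

Production systems usually push this into a shared store (e.g. Redis) so limits hold across API instances.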
- 4+ years of experience with strong fundamentals in Windchill Customization & Configuration, Reporting Framework Customization, Workflow Customization, and customer handling
- Strong Customization background around Form Processors, Validator, data utilities, Form Controllers etc.
- Strong programming skills in Java/J2EE technologies – JavaScript, GWT, jQuery, XML, JSPs, SQL etc.
- Deep Knowledge in Windchill architecture
- Experience in at least one full-lifecycle PLM implementation with Windchill.
- Should have strong coding skills in Windchill development and customization, ThingWorx Navigate development (mandatory), ThingWorx architecture configuration, mashup creation, and ThingWorx and Windchill upgrades
- Should have build and configuration management experience (mandatory) - HPQC/JIRA/Azure/SVN/GitHub/Ant
- Knowledge & Experience in Build and Release process
- Having worked on custom upgrade will be a plus.
- Understanding of the application development environment, database, data management, and infrastructure capabilities and constraints. Understanding of database administration, database design and performance tuning
- Follow Quality processes for tasks with appropriate reviews. Participate in sharing knowledge within the team.
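The build-and-release bullets above (Ant, build and configuration management) ultimately rest on resolving module build order from declared dependencies. A generic sketch using the standard library; the module names are hypothetical, not real Windchill artifacts:

```python
# Generic sketch of build-order resolution, the core of Ant-style
# build-and-release tooling. Module names are hypothetical.
from graphlib import TopologicalSorter

# module -> set of modules it depends on (illustrative graph)
deps = {
    "windchill-custom": {"common-utils"},
    "navigate-mashups": {"windchill-custom"},
    "common-utils": set(),
}

# static_order() emits each module only after all its dependencies
order = list(TopologicalSorter(deps).static_order())
print(order)
```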
Years: 5-9 Years
Job Responsibilities
Primary:
- Responsible for security road map for EPDM application
- Train the CI/CD team on the required security technologies and their adoption
- Lead the upskill program within the team
- Support the application architect with the right inputs on security processes and tools
- Help set up DevSecOps for EPDM.
- Find security vulnerabilities in the development process and manage sealed secrets
- Support in defining the Three-tier architecture.
Secondary:
- Coordination with different IT stakeholders as and when needed
- Suggest and implement further toolchains towards DevOps and GitOps
- Responsible for training peer colleagues
Skills:
Mandatory skill:
- Expert knowledge of container solutions. Must have >3 years of experience working with networking & debugging within Docker and Kubernetes.
- Hands-on experience with Kubernetes workload deployments using Kustomize & Helm.
- Good understanding of Bitnami, HashiCorp and other secret management tools
- SAST/DAST integration in CI/CD pipelines - design and implementation
- Expert knowledge of source control systems and build & integration tools (e.g., Git, Jenkins & Maven).
- Hands-on experience with designing CI/CD architecture & building pipelines (on on-prem, cloud & hybrid infrastructure services).
- Experience with security log management tools (e.g. Splunk, ELK/EFK stack, Azure Monitor or similar).
- Experience with monitoring tools like Prometheus-Grafana & Dynatrace.
- Experience with Infrastructure as a Service / Cloud computing (preferably Azure).
- Expert in writing automation scripts in YAML and Unix/Linux shell.
- Knowledge of Pulumi would be an added advantage.
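The "find security vulnerabilities in the development process" and SAST-in-CI duties above often start with detecting hard-coded credentials before they reach a repository. A simplified sketch; the two patterns below are illustrative, not a complete rule set like those shipped with real scanners:

```python
# Hedged sketch of pre-commit-style secret detection. The patterns are
# simplified illustrations, not an exhaustive or production rule set.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),   # shape of an AWS access key ID
    re.compile(r"""(?i)password\s*=\s*['"][^'"]+['"]"""),
]

def find_secrets(text: str) -> list:
    """Return all substrings in `text` that look like hard-coded secrets."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

sample = 'db_password = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"'
print(find_secrets(sample))
```

In a real pipeline this check would run as a CI stage or pre-commit hook and fail the build on any hit.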
We have an opportunity for a Lead Operations Engineer role with our client at Bangalore /Gurgaon. Sharing the JD for your reference. Please revert if you would be interested in this opportunity and we can connect accordingly.
JOB DETAILS
Shift timing:
9.00 AM - 6.00 PM / 11.00 AM - 8.00 PM / 2.00 PM - 11.00 PM / 7.00 PM - 3.00 AM IST (night shift allowance will be provided)
Position
Lead Operations Engineer
Location
Bangalore/ Gurgaon
About Our client
Who we are :
At a time when consumers are connected and empowered like never before, our client is helping the world's largest brands provide amazing experiences at every turn. It offers a set of powerful social capabilities that allow our clients to reach, engage, and listen to customers across 24 social channels. We empower entire organizations to work together across social, marketing, advertising, research, and customer care to manage customer experience at scale. Most exciting, our client works with 50% of the Fortune 500 and nine of the world's 10 most valued brands, including McDonald's, Nestlé, Nike, P&G, Shell, Samsung, and Visa.
What You'll Do
What You’ll Do As a Lead Operations Engineer at our client, you should be passionate about working on new technologies, high profile projects, and are motivated to deliver solutions on an aggressive schedule.
Candidates from product based companies only.
1. 5-7 years of exposure and working knowledge of data centers on-premise or on AWS/Azure/GCP.
2. Working Experience on Jenkins, Ansible, Git, Release & Deployments
3. Working Experience on ELK, Mongo, Kafka, Kubernetes.
4. Implement and operate SaaS environments hosting multiple applications and provide production support.
5. Contribute to automation and provisioning of environments.
6. Strong Linux systems administration skills with RHCE/Centos.
7. Have scripting knowledge in one of the following – Python/Bash/Perl.
8. Good knowledge of Gradle, Maven, etc.
9. Should have knowledge of service monitoring via Nagios, Sensu, etc.
10. Good to have knowledge of setting up and deploying application servers.
11. Mentoring Team members
● Good experience with continuous integration and deployment tools like Jenkins, Spinnaker, etc.
● Ability to understand problems and craft maintainable solutions.
● Working cross-functionally with a broad set of business partners to understand and integrate their API or data flow systems with Xeno, so a minimal understanding of data and API integration is a must.
● Experience with Docker and microservice-based architecture using orchestration platforms like Kubernetes.
● Understanding of public cloud; we use Azure and Google Cloud.
● Familiarity with web servers like Apache, nginx, etc.
● Knowledge of monitoring tools such as Prometheus, Grafana, New Relic, etc.
● Scripting in languages like Python, Golang, etc. is required.
● Some knowledge of database technologies like MySQL and Postgres is required.
● Understanding of Linux, specifically Ubuntu.
● Bonus points for knowledge and best practices related to security.
● Knowledge of Java or NodeJS would be a significant advantage.
Initially, when you join, some of the projects you'd get to own are:
● Audit and improve the overall security of the infrastructure.
● Setting up different environments for different sets of teams like QA, Development, Business, etc.
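Setting up separate environments per team, as described above, is commonly done with one isolated namespace per team in Kubernetes. A minimal sketch that generates namespace-style manifests; the team names and labels are illustrative:

```python
# Hedged sketch: generating minimal Kubernetes Namespace manifests, one
# per team, for environment isolation. Labels/names are illustrative.

def namespace_manifest(team: str) -> dict:
    """Build a Namespace manifest (as a dict) for one team's environment."""
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": f"{team}-env",
            "labels": {"team": team, "managed-by": "platform"},
        },
    }

teams = ["qa", "development", "business"]
manifests = [namespace_manifest(t) for t in teams]
print([m["metadata"]["name"] for m in manifests])
```

In practice these dicts would be serialized to YAML and applied via `kubectl` or a GitOps pipeline, with resource quotas and RBAC added per namespace.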
- Deploy the company's application on customers' public clouds and on-premise data centers
- Build Kubernetes-based workflows for a wide variety of use cases
- Document and automate the deployment process for internal and external deployments
- Interact with customers over calls for deployment and debugging
- Deployment and product support
Desired Skills and Experience
- 4-6 years of experience in infrastructure development, or development and operations.
- Minimum 2+ years of experience in Docker and Kubernetes.
- Experience working with Docker and Kubernetes; aware of Kubernetes internals, networking, etc. Experience with Linux infrastructure tools.
- Good interpersonal skills and communication with all levels of management.
- Extensive experience in setting up Kubernetes on AWS, Azure etc.
Good to Have
- Familiarity with Big Data Tools like Hadoop, Spark.
- Experience with Java Application Debugging.
- Experience in monitoring tools like Prometheus, Grafana etc
2) Expertise in developing OLAP cubes and developing complex calculations, aggregations, and dynamic security models using MDX/DAX functions in Azure Analysis Services or SSAS
3) Extensive use of Performance Monitor/SQL Profiler/DMVs to resolve deadlocks, monitor long-running queries, and troubleshoot cubes, SQL and T-SQL.
Roles & Responsibilities:
1) "SSAS" or "Azure Analysis Services" Lead Developer with 7+ years of experience in SSAS Azure data model development, SSAS data model deployment in Azure, and querying data from SSAS Azure to build reports.
2) Design and create SSAS/OLAP/OLTP/tabular cubes and automate processes for analytical needs.
3) Write optimized SQL queries for integration with other applications; maintain data quality and oversee database security, partitions and indexes.
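The partitioning responsibility above is typically implemented by slicing a fact table into date-range partitions, each with its own filter query. A small sketch of generating such a monthly scheme; the table and column names are hypothetical:

```python
# Hedged sketch of a monthly partition scheme for a fact table, as used
# when processing large cubes. Table/column names are hypothetical.
from datetime import date

def monthly_partitions(table: str, col: str, year: int) -> list:
    """Return one (partition name, WHERE clause) pair per month."""
    parts = []
    for m in range(1, 13):
        start = date(year, m, 1)
        # first day of the next month (rolls into the next year for Dec)
        end = date(year + (m == 12), m % 12 + 1, 1)
        where = f"{col} >= '{start}' AND {col} < '{end}'"
        parts.append((f"{table}_{year}{m:02d}", where))
    return parts

parts = monthly_partitions("FactSales", "OrderDate", 2023)
print(parts[0])
print(len(parts))
```

Each WHERE clause would back one partition's source query, so a nightly process can reprocess only the current month instead of the whole cube.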
- Working knowledge of setting up and running HD insight applications
- Hands on experience in Spark, Scala & Hive
- Hands on experience in ADF – Azure Data Factory
- Hands on experience in Big Data & the Hadoop ecosystem
- Exposure to Azure Service categories like PaaS components and IaaS subscriptions
- Ability to design and develop ingestion & processing frameworks for ETL applications
- Hands on experience in PowerShell scripting and deployment on Azure
- Experience in performance tuning and memory configuration
- Should be adaptable to learn & work on new technologies
- Should have good written and spoken communication
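The "ingestion & processing framework" bullet above describes composing extract, transform, and load stages, as an ADF pipeline or Spark job would. A tiny self-contained sketch of that shape; the stage functions and records are illustrative stand-ins, not real ADF/Spark activities:

```python
# Hedged sketch of a composable ETL pipeline. Stages and records are
# illustrative stand-ins for ADF activities or Spark transformations.

def extract() -> list:
    """Pretend source read; one record has a deliberately bad amount."""
    return [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "bad"},
            {"id": 3, "amount": "7.0"}]

def transform(rows: list) -> list:
    """Parse amounts, dropping rows that fail validation."""
    out = []
    for row in rows:
        try:
            out.append({"id": row["id"], "amount": float(row["amount"])})
        except ValueError:
            continue  # a real framework would route rejects to a store
    return out

def load(rows: list, sink: list) -> int:
    """Append to the sink and report how many rows landed."""
    sink.extend(rows)
    return len(rows)

sink: list = []
loaded = load(transform(extract()), sink)
print(loaded, sink)
```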