11+ Microsoft Exchange Jobs in Hyderabad | Microsoft Exchange Job openings in Hyderabad
Apply to 11+ Microsoft Exchange Jobs in Hyderabad on CutShort.io. Explore the latest Microsoft Exchange Job opportunities across top companies like Google, Amazon & Adobe.
Experience: 2–7 years
Location: Bangalore, Pune, Hyderabad
Notice Period: Immediate joiner to 30 days
Role & Responsibilities
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
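Since the role centers on reflections, caching, and query performance, here is a minimal Python sketch of the idea behind an aggregation reflection: a pre-computed summary that the engine answers from instead of scanning raw data. The sample rows and helper names are invented purely for illustration; real reflections are defined and refreshed inside Dremio itself.

```python
from collections import defaultdict

# Hypothetical raw "data lake" rows: (region, amount).
sales = [("EMEA", 120), ("APAC", 80), ("EMEA", 50), ("AMER", 200), ("APAC", 30)]

def query_raw(region):
    """Without a reflection: full scan of the raw data on every query."""
    return sum(amount for r, amount in sales if r == region)

def build_reflection(rows):
    """Pre-aggregate once, like an aggregation reflection refresh."""
    agg = defaultdict(int)
    for region, amount in rows:
        agg[region] += amount
    return dict(agg)

reflection = build_reflection(sales)

def query_reflection(region):
    """With the reflection: answer from the pre-aggregated store, no scan."""
    return reflection.get(region, 0)

# Both paths must return the same answer; only the cost differs.
assert query_raw("EMEA") == query_reflection("EMEA") == 170
```

The trade-off the architect manages is the same as in the sketch: reflections cost storage and refresh time but remove repeated scan work from hot query paths.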
Ideal Candidate
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Job Summary
We are looking for an experienced Drupal Developer with 5–10 years of expertise in Drupal CMS development, customization, and theming. The ideal candidate should be proficient in Drupal core APIs, custom module development, PHP, and front-end technologies (HTML5, CSS3, JavaScript/jQuery). The role involves building responsive, secure, and scalable web applications while collaborating with cross-functional teams.
Responsibilities
- Design, develop, and maintain Drupal-based websites and applications.
- Build and customize modules, themes, and templates for scalable solutions.
- Ensure responsive, cross-browser, and high-performance applications.
- Collaborate with UI/UX, QA, and backend teams to deliver end-to-end solutions.
- Troubleshoot, debug, and upgrade existing Drupal projects.
- Follow best practices in coding, version control, and security.
Mandatory Skills
- Strong proficiency in Drupal (7/8/9/10) including custom module development, key contributed modules, and Drupal core API.
- Solid knowledge of PHP, theme layer, and template systems.
- Expertise in HTML/HTML5, CSS/CSS3, JavaScript/jQuery for responsive websites.
- Familiarity with version control tools like Git.
- Good understanding of web security, SEO, and performance optimization.
- Design and develop scalable web applications using the MEAN/MERN stack.
- Build and optimize AI/LLM workflows using LangChain or LangGraph.
- Implement vector storage and semantic search using FAISS, Pinecone, Chroma, or Milvus.
- Build APIs, microservices, and integration layers.
- Optimize application performance and ensure code quality.
- Collaborate with cross-functional teams (product, design, backend, DevOps).
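The core idea behind the vector stores named above (FAISS, Pinecone, Chroma, Milvus) is ranking documents by similarity between embedding vectors. Here is a stdlib-only Python toy of that retrieval step; the hand-written three-dimensional vectors stand in for real embedding-model output, and the document texts are made up:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical pre-computed embeddings; a real pipeline would call an
# embedding model and store the vectors in FAISS/Pinecone/Chroma/Milvus.
store = {
    "reset your password": [0.9, 0.1, 0.0],
    "update billing info": [0.1, 0.9, 0.1],
    "delete your account": [0.7, 0.0, 0.6],
}

def top_k(query_vec, k=2):
    """Return the k stored documents most similar to the query vector."""
    ranked = sorted(store.items(), key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(top_k([1.0, 0.0, 0.1]))
```

In a RAG pipeline the `top_k` results would then be stuffed into the LLM prompt as context; dedicated vector databases replace the linear scan here with approximate nearest-neighbor indexes.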
Must-Have Skills
- Strong experience in Node.js, Express.js, MongoDB, and Angular/React.
- Hands-on experience in LLM apps, RAG pipelines, Vector Databases.
- Practical knowledge of LangChain / LangGraph.
- Experience with REST APIs, authentication, and integrations.
- Solid understanding of Git, CI/CD, and cloud platforms (AWS/Azure/GCP).
Key Responsibilities
- Design, develop, and maintain full stack web applications using Java and Angular
- Build robust backend services using Java, Spring Boot, REST APIs
- Develop responsive and dynamic user interfaces using Angular, HTML, CSS, TypeScript
- Integrate frontend applications with backend services and databases
- Write clean, maintainable, and efficient code following best practices
- Participate in code reviews, testing, debugging, and performance tuning
- Collaborate with cross-functional teams (UI/UX, QA, DevOps, Product)
- Support application deployment and production issues
Required Skills & Qualifications
- Strong experience in Java (Java 8+)
- Hands-on experience with Spring Boot, Spring MVC, Spring Security
- Proficiency in Angular (Angular 8+), TypeScript
- Experience with RESTful APIs
- Good knowledge of HTML5, CSS3, JavaScript
- Experience with databases such as MySQL, PostgreSQL, Oracle, or MongoDB
- Familiarity with Git/GitHub, Maven/Gradle
- Understanding of SDLC and Agile methodologies
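The listing targets Spring Boot for the REST layer; as a language-agnostic sketch of the shape such an endpoint takes (a `GET /users/{id}` route returning JSON or 404), here is a stdlib-only Python version. All names and the in-memory "database" are invented for the example:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

USERS = {1: {"id": 1, "name": "Asha"}}  # hypothetical in-memory store

class UserHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route: GET /users/<id> -> JSON user, or 404 if absent.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "users" and parts[1].isdigit():
            user = USERS.get(int(parts[1]))
            if user:
                body = json.dumps(user).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_response(404)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), UserHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(
        f"http://127.0.0.1:{server.server_port}/users/1") as resp:
    data = json.loads(resp.read())
server.shutdown()
assert data["name"] == "Asha"
```

In Spring Boot the routing, serialization, and error handling shown by hand here are handled declaratively with `@RestController` and `@GetMapping`.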
Hello,
Greetings from Coders Brain Technology Pvt. Ltd.
Coders Brain is a global leader in IT services, digital, and business solutions, partnering with its clients to simplify, strengthen, and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise, and a global network of innovation and delivery centers.
Location: Hyderabad
Position: Permanent with Coders Brain Technology Pvt. Ltd.
Experience: 5+ years
Notice Period: Immediate to 7 days only
Role: QA Automation
Job Description:
- Should have experience in Automation using Selenium/Java, Cucumber, and Rest API.
- Good Automation and Manual skills
- Should have hands-on experience in Automation using Selenium with Java.
- Should have experience in working in a team in an agile environment.
- Good to have hands-on experience in BDD/Cucumber.
- Good to have technical exposure in API/Web-services testing using Postman and Rest Assured.
- Be able to work on both manual and automation testing.
- Good knowledge of end-to-end testing of Salesforce implementation projects (Classic/Lightning).
- Good to have any Salesforce certification.
- Should have good knowledge of test management tools like Jira, Zephyr, etc.
- Hands-on experience with AEM testing for 1–2 years or performance testing [good to have]
- Strong technical skills, both functional and non-functional, manual and automation, ideally in a continuous delivery environment.
- A self-motivated and enthusiastic professional who charts their own path.
- An excellent communicator, able to present their work in daily meetings within the team or with clients.
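The BDD/Cucumber item above boils down to structuring tests as Given/When/Then steps. The sketch below shows that structure using only Python's `unittest`, with an invented `Cart` class standing in for the system under test; a real Cucumber setup would express the same steps in Gherkin feature files bound to step definitions:

```python
import unittest

class Cart:
    """Hypothetical system under test, in place of a real application."""
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)
    def total(self):
        return len(self.items)

class TestCartBDDStyle(unittest.TestCase):
    def test_adding_an_item_increases_the_total(self):
        # Given an empty cart
        cart = Cart()
        # When the user adds an item
        cart.add("book")
        # Then the cart total is 1
        self.assertEqual(cart.total(), 1)

if __name__ == "__main__":
    # exit=False so the script continues after the test run
    unittest.main(argv=["bdd"], exit=False)
```

The Given/When/Then comments map one-to-one onto Gherkin steps, which is what makes scenarios readable to non-technical stakeholders.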
If you're interested in the above requirement, please share the below-mentioned details:
- Current CTC:
- Expected CTC:
- Current Company:
- Notice Period:
- Current Location:
- Preferred Location:
- Total experience:
- Relevant experience:
- Highest qualification:
- DOJ (if offer in hand from another company):
- Offer in hand:
Also, send your updated CV ASAP.
1. RedHat OpenShift (L2/L3 Expertise):
- Setup of the OpenShift Ingress Controller (and deployment of multiple Ingresses)
- Setup of the OpenShift Image Registry
- Very good knowledge of the OpenShift Management Console to help application teams manage their pods and troubleshoot issues
- Expertise in deploying artifacts to an OpenShift cluster and configuring customized scaling capabilities
- Knowledge of pod logging in an OpenShift cluster for troubleshooting
2. Architect:
- Suggestions on architecture setup
- Validate the architecture and advise on pros, cons, and feasibility
- Management of multi-location sharded architectures
- Multi-region sharding setup
3. Application DBA:
- Validate and help with sharding decisions at the collection level
- Provide deep performance analysis by examining execution plans
- Index suggestions
- Archival suggestions and options
4. Collaboration
Ability to plan and delegate work by providing specific instructions.
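The "index suggestions" item above rests on one idea: an index turns a full collection scan into a logarithmic lookup. The stdlib sketch below illustrates this with a sorted key list and binary search; the documents and field names are invented, and a real MongoDB index is of course a server-side B-tree rather than a Python list:

```python
import bisect

# Hypothetical "collection": unsorted documents keyed by user_id.
docs = [{"user_id": uid, "name": f"user-{uid}"} for uid in (42, 7, 99, 13, 58)]

def find_scan(uid):
    """Without an index: full collection scan, O(n)."""
    for d in docs:
        if d["user_id"] == uid:
            return d
    return None

# Build a sorted "index" on user_id once: (key, position) pairs.
index = sorted((d["user_id"], i) for i, d in enumerate(docs))
keys = [k for k, _ in index]

def find_indexed(uid):
    """With the index: binary search over keys, O(log n)."""
    pos = bisect.bisect_left(keys, uid)
    if pos < len(keys) and keys[pos] == uid:
        return docs[index[pos][1]]
    return None

# Same answers, different cost -- which is what an execution plan reveals.
assert find_scan(99) == find_indexed(99)
```

Reading an execution plan is essentially checking which of these two paths the query planner chose (COLLSCAN vs IXSCAN in MongoDB's terms).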
PREFERRED QUALIFICATION AND SKILLS:
• In-depth knowledge of Vue.js
• In-depth knowledge of HTML5
• In-depth knowledge of CSS3 (Less, Sass, Stylus is a plus); knowledge of the BEM methodology is preferred
• Detailed knowledge of JavaScript (ES2015 is a plus)
• Experience in any JS framework (Angular, React + Redux, Vue, etc.) is a must
• Understanding of the WCAG web accessibility guidelines
• Must be able to mentor and guide junior technical resources
• Proactive; comes forward with technology-related initiatives. A motivated self-starter
• Good communication skills: English, oral and written
Skills:
- .NET framework, VB, C#
- HTML5, Bootstrap, CSS, Less
- Communication protocols like HTTP, TCP, WCF, Web API, Modbus
- Databases: MS SQL Server 2008 R2, MySQL; knowledge of Redis and MongoDB
- Scripting: JavaScript, jQuery, AngularJS, SignalR, Ajax
- AWS SDK preferred; cloud migration, IIS, self-hosting, Windows Server
- Other tools: Postman & Fiddler, Tortoise SVN, Mantis bug tracking, Redmine
Key Strengths :
- Good debugging skills.
- Major responsibilities include planning the development cycle, design, module development, code merges, and unit/integration testing.
- Highly motivated team player.
- Experience in working on multiple projects simultaneously.
- Quick to learn new technologies.
- Capable of taking independent responsibility as well as contributing as a productive team member.
Key responsibilities:
- Involved in discussions on the design and development of new feature enhancements.
- Involved in integration with IoT devices over a TCP binary protocol.
- Involved in the design and development of Web APIs to expose data uploaded by devices to third-party applications.
- Coordinating with the testing team for fixing bugs.
- Coordinating with firmware (Embedded) team for device integration.
- End to end design and development of modules.
Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the DataLake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low latency access to distributed storage, auto scaling, and self healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S./equivalent in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team
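The multithreading and concurrency-control requirements above come down to protecting shared state during read-modify-write operations. Here is a minimal, hedged Python sketch of lock-based mutual exclusion, the simplest form of the concurrency control the role deals with at distributed-system scale; the counter and worker are invented for the example:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times, one lock hold per update."""
    global counter
    for _ in range(n):
        with lock:  # critical section: read-modify-write must be atomic
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock, all 4 * 10,000 increments survive; without it, updates
# could interleave and be lost.
assert counter == 40_000
```

In a distributed engine the same invariant is enforced across machines with distributed locks or consensus protocols rather than an in-process mutex.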



