11+ SCADA Jobs in Hyderabad | SCADA Job openings in Hyderabad
AUTOMATION ENGINEER JOB DESCRIPTION: -
To develop functional design including control write-up and control schematics.
To carry out engineering SCADA programming and interface programming.
To develop process functional design including control write-up and control schematics.
To do integration with PLC/DDC and other controllers, routers and gateways.
To develop Hardware & Software design specifications and other software/Test documentation.
To carry out panel design and instrumentation design, and to prepare the relevant drawings and method statement.
To carry out all required modifications in software, as per field requirements.
To carry out all troubleshooting remotely or in the field.
FUNCTIONAL REQUIREMENTS: -
Minimum 3 to 4 years of relevant experience in industrial control systems; the same experience in the pharma industry would be an added advantage.
Degree/Diploma in Electrical/Electronics/Instrumentation Engineering/Computer Engineering.
Experience in control systems architecture, hardware and system software, HMI, PLC, and data acquisition is a must.
Knowledge of major PLCs from Siemens, Allen-Bradley, Schneider, GE Fanuc, Omron and Mitsubishi would be preferred.
Knowledge of major SCADA packages like Siemens SIMATIC WinCC V7, Wonderware InTouch, Mitsubishi MC Works64, Citect, ClearSCADA, iFIX and CIMPLICITY would be preferred.
Knowledge of standard communication protocols like Modbus, Modbus X, Profibus, ASCII and DNP3 would be preferred.
Experience in panel design and instrumentation design is a must.
Experience in writing software programs in C/C++/C# and Oracle PL/SQL.
Experience in site testing and commissioning with the user is a must.
Experience in system training and system O&M documentation is a must.
Ability to support systems via remote computer access and provide phone support to onsite operations is a must.
Knowledge of and proficiency in the operation and maintenance of MS Windows-based computers and MS Office applications is a must.
Must be self-motivated, with the ability to work alone or with multiple teams consisting of operational staff, client managers, vendors and contractors, if any, via on-site presence, telephone and email.
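The Modbus protocol listed above has a simple, well-documented wire format. As a minimal sketch (the unit id, register address and count below are illustrative values, not anything from this posting), a Modbus TCP "read holding registers" request frame can be built like this:

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, quantity):
    """Build a Modbus TCP request frame for function 0x03 (Read Holding Registers)."""
    function_code = 0x03
    # PDU: function code (1 byte) + start address (2 bytes) + register count (2 bytes)
    pdu = struct.pack(">BHH", function_code, start_addr, quantity)
    # MBAP header: transaction id, protocol id (0 = Modbus), byte count of what
    # follows (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Read 3 holding registers starting at 0x006B from unit 0x11
frame = modbus_read_holding_registers(1, 0x11, 0x006B, 3)
```

In practice an engineer would send this frame over TCP port 502 and parse the mirrored MBAP header in the response; serial Modbus RTU uses the same PDU with a CRC instead of the MBAP header.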
DESIRED SKILLS: -
SCADA communication data flow & protocols.
RTU/EFM devices and other SCADA related field devices.
SCADA system processes.
PREFERRED LICENCES/CERTIFICATIONS: -
SCADA
CCNA
CCNP
CISSP
JOB RESPONSIBILITIES: -
Would be a single point of contact for all associated SCADA installation design activities.
Would be responsible for the delivery of SCADA installation designs.
Estimation of project cost.
Competent to clearly present information in both one-on-one and group settings.
You would be monitoring device changes, and investigating and documenting the reasons for those changes.
You would be required to conduct periodic vulnerability assessments.
Drive the completion of all required implementation processes and tollgate steps, including requirements gathering, process flow documentation, testing, data validation, training, post-go-live support and other project-related activities.
Work with the process engineering and quality team leaders to identify and deliver process simplification solutions that drive reductions in cost and cycle time as well as improved quality.
Build strong partnerships with cross-functional team members.
Create a consistent communication rhythm on project status and operational improvements.
Manage and co-ordinate the planning, definition and provision of system engineering project lifecycle deliverables.
Position – Site Reliability Engineer (SRE)
Location – Navi Mumbai
Who are we
Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics-based diagnostic solution for Tuberculosis was recognized as one of the top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India in the BIRAC Showcase event in Delhi, 2022.
About the Role:
We are seeking a highly skilled Site Reliability Engineer (SRE) to join our infrastructure team. The ideal candidate will bring deep expertise in Linux systems, cloud infrastructure automation, and network administration, with a strong focus on reliability, scalability, and performance, while ensuring firewall/network security compliance.
Key Responsibilities:
- Develop and maintain automation scripts (Bash, Python, etc.) for system and cloud infrastructure operations.
- Manage, monitor, and troubleshoot Linux servers in production environments
- Use network tools in Linux to triage network connectivity and performance issues.
- Configure, maintain, and secure network infrastructure including switches, routers, and firewalls.
- Design and execute Network Continuity Plans (NCP) and disaster recovery strategies.
- Collaborate with cross-functional teams to triage and resolve site-specific production issues related to server deployment, SOP adherence, and defect resolution.
- Collaborate with developers and DevOps teams to define SLAs, SLOs, and error budgets.
- Implement proactive monitoring and alerting using modern observability tools.
- Participate in on-call rotations and incident response processes.
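The SLOs and error budgets mentioned above reduce to simple arithmetic: an error budget is the share of a time window in which the SLO permits the service to be unavailable. A minimal sketch (the 99.9% target and 30-day window are illustrative assumptions):

```python
def error_budget_minutes(slo, window_days=30):
    """Minutes of permitted downtime for a given availability SLO
    over a rolling window of `window_days` days."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

# A 99.9% SLO leaves (1 - 0.999) * 43200 = 43.2 minutes of budget per 30 days
budget = error_budget_minutes(0.999)
```

Teams typically alert on the budget burn rate rather than on raw error counts, so that a fast-burning incident pages immediately while slow background errors surface in review.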
Objectives of this Role:
- Build, automate, and manage scalable infrastructure and deployment pipelines to support development and production environments.
- Enable rapid, secure, and reliable software delivery across engineering teams.
- Ensure system availability, performance, and security in cloud and containerized environments.
- Implement and enforce best practices in CI/CD, observability, and incident response.
- Collaborate with cross-functional teams including software developers, QA, and product managers.
- Proactively identify and automate manual processes to improve engineering efficiency.
Required Skills and Experience:
- 3–7 years of experience as an SRE, DevOps, or Systems/Network Engineer.
- Strong scripting skills (e.g., Bash, Python, or Go).
- Proficiency in Linux administration and performance tuning.
- Deep understanding of TCP/IP, routing, DNS, firewalls, and networking protocols.
- Hands-on experience managing network infrastructure and firewall rules (e.g., iptables, pfSense, Palo Alto, etc.).
- Hands-on experience with Docker (multi-stage builds, docker-compose).
Soft Skills
- Proactive problem-solving and ownership mindset
- Strong documentation and communication skills
- Ability to mentor junior engineers
- Curiosity to learn and experiment with emerging DevOps tools
Preferred Qualifications:
- Certifications such as CKA/CKAD, RHCE, AWS Certified SysOps Administrator, or Cisco CCNA/CCNP.
- Experience with service meshes (e.g., Istio), observability stacks (e.g., Prometheus, Grafana), or network policy management (e.g., Calico, Cilium).
- Exposure to secure DevOps practices and compliance standards (e.g., ISO, NIST).
To know more about us – https://haystackanalytics.in
- Position: Appian Tech Lead
- Job Description:
- Extensive experience in Appian BPM application development
- Knowledge of Appian architecture and best practices for its objects
- Participate in analysis, design, and new development of Appian based applications
- Team leadership is mandatory: provide technical leadership to Scrum teams. Appian certification (L1, L2 or L3) is mandatory.
- Must be able to multi-task, work in a fast-paced environment, and resolve problems faced by the team
- Build applications: interfaces, process flows, expressions, data types, sites and integrations
- Proficient with SQL queries and with accessing data present in DB tables and views
- Experience in Analysis, Designing process models, Records, Reports, SAIL, forms, gateways, smart services, integration services and web services
- Experience working with different Appian Object types, query rules, constant rules and expression rules
Qualifications
- At least 6 years of experience in Implementing BPM solutions using Appian 19.x or higher
- Over 8 years in Implementing IT solutions using BPM or integration technologies
- Experience in Scrum/Agile methodologies with Enterprise level application development projects
- Good understanding of database concepts and strong working knowledge of at least one major database (e.g., Oracle, SQL Server, MySQL)
- Appian BPM application development on version 19.x or higher
- Experience with integrations using web services (e.g., XML, REST, WSDL, SOAP APIs, JDBC, JMS)
- Good leadership skills and the ability to lead a team of software engineers technically
- Experience working in Agile Scrum teams
- Good Communication skills
The Sr. Analytics Engineer would provide technical expertise in needs identification, data modeling, data movement, and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities :
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements :
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization/tuning/resource allocations
- Excellent understanding of in-memory distributed computing frameworks like Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL/analytical databases (e.g., Redshift, BigQuery, Cassandra, etc.).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob storage, AWS S3, Google Cloud storage, etc.
- Have a deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus
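The Spark resource-allocation experience asked for above usually starts from simple cluster arithmetic. The sketch below illustrates a common rule-of-thumb executor sizing calculation; the node counts, the 5-cores-per-executor heuristic and the 10% overhead fraction are illustrative assumptions, not values from this posting:

```python
def executor_layout(nodes, cores_per_node, mem_per_node_gb,
                    cores_per_executor=5, overhead_fraction=0.10):
    """Rule-of-thumb Spark executor sizing: reserve 1 core and 1 GB per node
    for the OS and daemons, pack executors of `cores_per_executor` cores,
    and carve an overhead fraction out of each executor's memory share."""
    usable_cores = cores_per_node - 1
    usable_mem = mem_per_node_gb - 1
    executors_per_node = usable_cores // cores_per_executor
    total_executors = executors_per_node * nodes - 1   # leave one slot for the driver
    mem_per_executor = usable_mem / executors_per_node
    heap_gb = mem_per_executor * (1 - overhead_fraction)  # rest -> memoryOverhead
    return total_executors, executors_per_node, round(heap_gb, 1)

# e.g. a 10-node cluster with 16 cores and 64 GB per node
layout = executor_layout(10, 16, 64)
```

The resulting numbers map onto `--num-executors`, `--executor-cores` and `--executor-memory`; real tuning then iterates from this baseline using the Spark UI's stage and GC metrics.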
Main tasks
- Supervision of the CI/CD process for automated builds and deployments of web services, web applications and desktop tools in cloud and container environments
- Responsibility for the operations side of a DevOps organization, especially for development in the environment of container technology and orchestration, e.g. with Kubernetes
- Installation, operation and monitoring of web applications in cloud data centers, both for development and testing and for the operation of our own production cloud
- Implementation of installations of the solution especially in the container context
- Introduction, maintenance and improvement of installation solutions for development in the desktop and server environment as well as in the cloud and with on-premise Kubernetes
- Maintenance of the system installation documentation and delivery of trainings
- Execution of internal software tests and support of involved teams and stakeholders
- Hands-on experience with Azure DevOps.
Qualification profile
- Bachelor’s or master’s degree in communications engineering, electrical engineering, physics or comparable qualification
- Experience in software development
- Installation and administration of Linux and Windows systems including network and firewalling aspects
- Experience with build and deployment automation using tools like Jenkins, Gradle, Argo, AnangoDB or similar, as well as system scripting (Bash, PowerShell, etc.)
- Interest in operation and monitoring of applications in virtualized and containerized environments in cloud and on-premise
- Server environments, especially application, web and database servers
- Knowledge of VMware/K3d/Rancher is an advantage
- Good spoken and written knowledge of English
Job Responsibilities
- Design, build & test ETL processes using Python & SQL for the corporate data warehouse
- Inform, influence, support, and execute our product decisions
- Maintain advertising data integrity by working closely with R&D to organize and store data in a format that provides accurate data and allows the business to quickly identify issues.
- Evaluate and prototype new technologies in the area of data processing
- Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
- High energy level, strong team player and good work ethic
- Data analysis, understanding of business requirements and translation into logical pipelines & processes
- Identification, analysis & resolution of production & development bugs
- Support the release process including completing & reviewing documentation
- Configure data mappings & transformations to orchestrate data integration & validation
- Provide subject matter expertise
- Document solutions, tools & processes
- Create & support test plans with hands-on testing
- Peer reviews of work developed by other data engineers within the team
- Establish good working relationships & communication channels with relevant departments
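The Python-plus-SQL ETL responsibilities above follow a standard extract/transform/load shape. As a minimal sketch (the `ad_spend` table, its columns and the sample rows are invented for illustration; a real pipeline would target the corporate warehouse rather than in-memory SQLite):

```python
import sqlite3

def run_etl(raw_rows):
    """Extract raw (campaign, spend_dollars) rows, transform them
    (drop rows with non-positive spend, convert dollars to integer cents),
    and load them into a warehouse table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE ad_spend (campaign TEXT, spend_cents INTEGER)")
    # Transform: validate and normalize before loading
    cleaned = [(name, int(round(spend * 100)))
               for name, spend in raw_rows if spend > 0]
    conn.executemany("INSERT INTO ad_spend VALUES (?, ?)", cleaned)
    conn.commit()
    return conn

conn = run_etl([("brand", 12.50), ("search", -1.0), ("video", 3.999)])
total = conn.execute("SELECT SUM(spend_cents) FROM ad_spend").fetchone()[0]
```

Storing money as integer cents in the transform step is one way to keep the "data integrity" goal above concrete: downstream SQL aggregations stay exact instead of accumulating float error.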
Skills and Qualifications we look for
- University degree 2.1 or higher (or equivalent) in a relevant subject. Master’s degree in any data subject will be a strong advantage.
- 4–6 years of experience with data engineering.
- Strong coding ability and software development experience in Python.
- Strong hands-on experience with SQL and Data Processing.
- Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc)
- Good working experience in any one of the ETL tools (Airflow would be preferable).
- Should possess strong analytical and problem solving skills.
- Good-to-have skills: Apache PySpark, CircleCI, Terraform
- Motivated, self-directed, able to work with ambiguity and interested in emerging technologies, agile and collaborative processes.
- Understanding & experience of agile / scrum delivery methodology
Job Responsibilities:
The role is responsible for designing, coding, and modifying web apps, from layout to function, according to the specifications, and for integrating data from various back-end services and databases.
Job Duties
• Candidate must have a strong understanding of UI, cross-browser compatibility, general web functions, and standards.
• The position requires constant communication with colleagues.
• Experience in working with distributed environments
• Deep expertise and hands-on experience with web applications and technologies such as HTML, CSS, JavaScript, jQuery, and APIs.
Skills:
Proven working experience (2 years) in:
• Node.js
• Angular
• React
Employment Type
Full-time

at Plexcel Info Systems Pvt Ltd
Job Description
✓ 7–9 years of overall hands-on experience in QA, with at least 3+ years in Test Automation.
✓ Collaborate with the testing team and application developers to improve the overall quality of software programs, ensuring quality throughout the software development life cycle.
✓ Expertise in working with tools like Eclipse, Selenium, Cucumber, Appium, Serenity, Jenkins, JUnit and Maven.
✓ Hands-on programming experience in Java.
✓ Excellent knowledge of Serenity BDD framework with Java and Selenium skills.
✓ Experience in Designing, developing, debugging and executing Automation Scripts.
✓ Proficiency in writing clean, modular, reusable code using design patterns.
✓ Proficiency at identifying and analyzing the root cause of complex bugs in your code as well as in others' code.
✓ Create and run automation test cases against web interfaces and APIs.
✓ Preferable experience on Serenity testing framework.
✓ Must have experience on Cucumber, Selenium, BDD.
✓ Knowledge on Source Code Management - GitHub/Bitbucket.
✓ Expertise in developing test automation and Continuous Integration (CI) / Continuous Delivery (CD) solutions using Jenkins.
✓ Strong ability to write automation scripts using Java.
✓ Well versed in Gherkin.
✓ Familiarity with Agile frameworks is a plus.
✓ Knowledge of issue tracking and project management tools such as JIRA/Zephyr.
✓ Knowledge of any automation tools like Test Complete is an additional plus.
✓ Should be able to take up manual tasks as required.
✓ Creative thinking, good problem solving skills.
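The "clean, modular, reusable code using design patterns" asked for above is most often realized in UI automation as the Page Object pattern. The sketch below illustrates the idea with a stubbed driver so it runs standalone; in a real suite the driver would be a Selenium WebDriver, and the CSS locators here are invented:

```python
class LoginPage:
    """Page Object: wraps a page's locators and actions behind a readable
    API, so test cases never manipulate raw locators directly."""
    USERNAME = "#username"   # hypothetical CSS locators
    PASSWORD = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return self


class FakeDriver:
    """Test double standing in for a Selenium WebDriver; records actions."""
    def __init__(self):
        self.actions = []

    def type(self, locator, text):
        self.actions.append(("type", locator, text))

    def click(self, locator):
        self.actions.append(("click", locator))


driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
```

When a locator changes, only the page class is edited, not every test that logs in; the same structure maps directly onto Serenity's `PageObject` base class and Cucumber step definitions.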
Skills / Expertise needed:
✓ Provide technology leadership for enhancing test automation capability at organization level
✓ Ability and proven experience in advising management on test automation strategy, setting direction for a long-term automation plan, bringing in best practices and ROI evaluation
✓ Possess in-depth knowledge of multiple open-source/commercial tools for web test automation (QTP/UFT, Selenium, Test Complete, Coded UI, Tosca, etc.) and mobile test automation (Ranorex, Appium, Robotium, SeeTest, etc.)
✓ Web and mobile testing (iOS, Android, and Windows), automation frameworks, Continuous Integration tools (Jenkins) and infrastructure for automation
✓ Sound understanding of agile and modular framework methodologies and concepts
✓ Providing end-to-end delivery support for testing engagements
✓ Reviewing and implementing best-fit solutions among options evaluated
✓ Ability to implement quantitative test process measurement techniques
✓ Integrating manual and automated test effort
✓ Understanding of software quality best practices: test strategy and planning, test case development, test case execution, deployment, test data, defect tracking, and test
4-6 years of total experience in data warehousing and business intelligence
3+ years of solid Power BI experience (Power Query, M-Query, DAX, Aggregates)
2 years’ experience building Power BI using cloud data (Snowflake, Azure Synapse, SQL DB, data lake)
Strong experience building visually appealing UI/UX in Power BI
Understand how to design Power BI solutions for performance (composite models, incremental refresh, analysis services)
Experience building Power BI using large data in direct query mode
Expert SQL background (query building, stored procedure, optimizing performance)
Software Engineer – C++ (3-6 years of experience)
1. Telecom/VoLTE, LTE, 2G/3G experience preferred.
2. Programming knowledge of multi-threading, sockets, IPCs.
3. Well versed with the C++ standard library and Boost libraries.
4. Working knowledge of GNU compilers, optimization techniques on Unix/Linux based systems.
5. Proficient in debugging tools like GDB/Valgrind and profiling tools like OProfile.
6. Knowledge of Diameter (AAA) Stack
- Development experience of communication protocol stacks
- Hands on experience in multi-threaded design techniques and implementation
- Good hands-on experience on data structures and algorithms
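The multi-threaded design techniques this role calls for center on patterns such as producer/consumer over a thread-safe queue. The posting targets C++ (std::thread, Boost), but the pattern is language-agnostic; a minimal runnable sketch in Python, with squaring standing in for the real per-item work, looks like this:

```python
import queue
import threading

def producer_consumer(items, workers=3):
    """Producer/consumer: a thread-safe queue decouples the producing
    thread from a pool of consumer threads; None sentinels shut workers down."""
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def consume():
        while True:
            item = q.get()
            if item is None:            # sentinel: stop this worker
                q.task_done()
                return
            with lock:
                results.append(item * item)   # stand-in for real work
            q.task_done()

    threads = [threading.Thread(target=consume) for _ in range(workers)]
    for t in threads:
        t.start()
    for item in items:                  # producer side
        q.put(item)
    for _ in threads:                   # one sentinel per worker
        q.put(None)
    q.join()                            # wait until every item is processed
    for t in threads:
        t.join()
    return sorted(results)
```

The C++ equivalent replaces `queue.Queue` with a mutex-and-condition-variable-guarded `std::queue` (or a lock-free queue), which is exactly the kind of primitive the std/Boost requirement above refers to.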
