Lead Developer (IoT, Java, Azure)

at Zyvka Global Services

Posted by Ridhima Sharma
Remote, Bengaluru (Bangalore)
5 - 12 yrs
₹1L - ₹30L / yr
Full time
Skills
Internet of Things (IOT)
Java
Spring Boot
SQL Server
NoSQL Databases
Docker
Kubernetes
Git
Microsoft Windows Azure
SQL Azure

Responsibilities

  • Design, plan, and control the implementation of business solution requests/demands
  • Apply best practices in design and coding, and guide the rest of the team accordingly
  • Gather requirements and specifications to understand client needs in detail, and translate them into system requirements
  • Drive complex technical projects from planning through execution
  • Perform code reviews and manage technical debt
  • Handle release deployments and production issues
  • Coordinate stress tests, stability evaluations, and support for the concurrent processing of specific solutions
  • Participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews

Skills

  • Degree in Informatics Engineering, Computer Science, or a similar area
  • Minimum of 5 years' work experience in similar roles
  • Expert knowledge in developing cloud-based applications with Java, Spring Boot, Spring REST, Spring Data JPA, and Spring Cloud
  • Strong understanding of Azure Data Services
  • Strong working knowledge of SQL Server, SQL Azure Database, NoSQL, data modeling, Azure AD, ADFS, and Identity & Access Management
  • Hands-on experience with the ThingWorx platform (application development, mashup creation, installation of ThingWorx and ThingWorx components)
  • Strong knowledge of IoT platforms
  • Development experience with microservices architecture best practices, Docker, and Kubernetes
  • Experience designing/maintaining/tuning high-performance code to ensure optimal performance
  • Strong knowledge of web security practices
  • Experience working in Agile development
  • Knowledge of Google Cloud Platform and Kubernetes
  • Good understanding of Git, source control procedures, and feature branching
  • Fluent in English - written and spoken (mandatory)

About Zyvka Global Services

Founded
2021
Type
Products & Services
Stage
Bootstrapped

Similar jobs

Scala Developer

at a company helping its customers successfully navigate their digital transformation

Agency job
via HyrHub
Scala
Java
Spark
Amazon Web Services (AWS)
Amazon EC2
Bengaluru (Bangalore)
5 - 11 yrs
₹15L - ₹25L / yr

Job Requirements:

- Define, implement and validate solution frameworks and architecture patterns for data modeling, data integration, processing, reporting, analytics and visualization using leading cloud, big data, open-source and other enterprise technologies.

- Develop scalable data and analytics solutions leveraging standard platforms, frameworks, patterns and full stack development skills.

- Analyze, characterize and understand data sources, participate in design discussions and provide guidance related to database technology best practices.

- Write tested, robust code that can be quickly moved into production

Responsibilities:

- Experience with distributed data processing and management systems.

- Experience with cloud technologies including Spark SQL, Java/Scala, HDFS, AWS EC2, AWS S3, etc.

- Familiarity with leveraging and modifying open source libraries to build custom frameworks.

Primary Technical Skills:
- Spark SQL, Java/Scala, sbt/Maven/Gradle, HDFS, Hive, AWS (EC2, S3, SQS, EMR, Glue scripts, Lambda, Step Functions), IntelliJ IDE, JIRA, Git, Bitbucket/GitLab, Linux, Oozie.


Notice period: maximum 30-45 days only
Job posted by
Shwetha Naik

Architect - Analytics / K8s

at Product Development

Agency job
via Purple Hirez
Analytics
Data Analytics
Kubernetes
PySpark
Python
Kubeflow
Hyderabad
12 - 20 yrs
₹15L - ₹50L / yr

Job Description

We are looking for an experienced engineer with superb technical skills who will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytics solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, scalable software solutions.

Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, machine learning is a plus.

 

Skills

  • Bachelor's/Master's/PhD in CS, or equivalent industry experience
  • Demonstrated expertise in building and shipping cloud-native applications
  • 5+ years of industry experience administering (including setting up, managing, and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka Streams and PySpark, and streaming databases like Druid or equivalents like Hive
  • Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
  • Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and Managed Kafka
  • 5+ years of industry experience in Python
  • Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
  • Experience with scripting languages; Python experience highly desirable; experience in API development using Swagger
  • Implementing automated testing platforms and unit tests
  • Proficient understanding of code versioning tools, such as Git
  • Familiarity with continuous integration (e.g., Jenkins)

Responsibilities

  • Architect, design, and implement large-scale data processing pipelines using Kafka Streams, PySpark, Fluentd, and Druid
  • Create custom operators for Kubernetes and Kubeflow
  • Develop data ingestion processes and ETLs
  • Assist with DevOps operations
  • Design and implement APIs
  • Identify performance bottlenecks and bugs, and devise solutions to these problems
  • Help maintain code quality, organization, and documentation
  • Communicate with stakeholders regarding various aspects of the solution
  • Mentor team members on best practices
Job posted by
Aditya K
Data Warehouse (DWH)
Informatica
ETL
SQL Azure
Windows Azure
Python
PySpark
Synapse
Azure Data Factory
Azure Databricks
Remote only
4 - 7 yrs
₹7L - ₹10L / yr
We need Data Engineers. Below is the JD:
a. 4+ years of experience in Azure development using PySpark (Databricks) and Synapse.
b. Real-world project experience using ADF pipelines to bring data from on-premises applications into Azure.
c. Strong working experience transforming data using PySpark on Databricks.
d. Experience with the Synapse database and transformations within Synapse.
e. Strong knowledge of SQL.
f. Experience working with multiple kinds of source systems (e.g. HANA, Teradata, MS SQL Server, flat files, JSON, etc.)
g. Strong communication skills.
h. Experience working in Agile environments.
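As a rough illustration of the transformation work in point (c), the following plain-Python sketch mirrors a typical cleansing step. The record shape and cleaning rules here are invented for illustration; in a real pipeline this logic would be expressed as PySpark DataFrame operations on Databricks.

```python
def clean_records(records):
    """Drop rows missing an 'id', trim string fields, and cast 'amount' to float."""
    cleaned = []
    for rec in records:
        if not rec.get("id"):
            continue  # skip rows without a primary key
        # Trim whitespace from every string field
        out = {k: (v.strip() if isinstance(v, str) else v) for k, v in rec.items()}
        try:
            out["amount"] = float(out.get("amount", 0))
        except (TypeError, ValueError):
            out["amount"] = 0.0  # default for unparseable amounts
        cleaned.append(out)
    return cleaned
```

In PySpark the same step would be a chain of `withColumn` / `filter` calls, but the validation logic to encode is the same.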
Job posted by
Priya Sahni

Data Engineer

at RedSeer Consulting

Python
PySpark
SQL
pandas
Cloud Computing
Microsoft Windows Azure
Big Data
Bengaluru (Bangalore)
0 - 2 yrs
₹10L - ₹15L / yr

BRIEF DESCRIPTION:

At least 1 year of Python, Spark, and SQL data engineering experience

Primary Skillset: PySpark, Scala/Python/Spark, Azure Synapse, S3, RedShift/Snowflake

Relevant Experience: Legacy ETL job Migration to AWS Glue / Python & Spark combination

 

ROLE SCOPE:

Reverse engineer the existing/legacy ETL jobs

Create the workflow diagrams and review the logic diagrams with Tech Leads

Write equivalent logic in Python & Spark

Unit test the Glue jobs and certify the data loads before passing to system testing

Follow best practices and enable appropriate audit & control mechanisms

Be analytically skillful: identify root causes quickly and debug issues efficiently

Take ownership of the deliverables and support the deployments

 

REQUIREMENTS:

Create data pipelines for data integration into cloud stacks, e.g. Azure Synapse

Code data processing jobs in Azure Synapse Analytics, Python, and Spark

Experience in dealing with structured, semi-structured, and unstructured data in batch and real-time environments.

Should be able to process .json, .parquet, and .avro files
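A minimal sketch of the file-format handling mentioned above: routing a path to the right reader by extension. The dispatch table is hypothetical; in Spark the returned format string would feed `spark.read.format(...)`.

```python
import os

# Map each supported extension to a reader format name; in Spark these would
# correspond to spark.read.json / spark.read.parquet / spark.read.format("avro").
READERS = {".json": "json", ".parquet": "parquet", ".avro": "avro"}

def pick_format(path):
    """Return the reader format for a file path, or raise for unsupported types."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in READERS:
        raise ValueError(f"unsupported file type: {ext}")
    return READERS[ext]
```

Failing fast on unsupported extensions keeps bad inputs out of the pipeline instead of producing half-read data downstream.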

 

PREFERRED BACKGROUND:

Tier-1/2 candidates from IITs/NITs/IIITs

However, relevant experience and a learning attitude take precedence

Job posted by
Raunak Swarnkar

Data Scientist

at TVS Credit Services Ltd

Founded 2009  •  Services  •  100-1000 employees  •  Profitable
Data Science
R Programming
Python
Machine Learning (ML)
Hadoop
SQL server
Linear regression
Predictive modelling
Chennai
4 - 10 yrs
₹10L - ₹20L / yr
Job Description:
  • Be responsible for scaling our analytics capability across all internal disciplines and guide our strategic direction with regard to analytics
  • Organize and analyze large, diverse data sets across multiple platforms
  • Identify key insights and leverage them to inform and influence product strategy
  • Interact with vendors or partners in a technical capacity on scope/approach & deliverables
  • Develop proofs of concept to prove or disprove the validity of a concept
  • Work with all parts of the business to identify analytical requirements and formalize an approach for reliable, relevant, accurate, efficient reporting on those requirements
  • Design and implement advanced statistical testing for customized problem solving
  • Deliver concise verbal and written explanations of analyses to senior management that elevate findings into strategic recommendations

Desired Candidate Profile:
  • MTech/BE/BTech/MSc in CS, Stats, Maths, Operations Research, Econometrics, or any quantitative field
  • Experience in using Python, R, SAS
  • Experience working with large data sets and big data systems (SQL, Hadoop, Hive, etc.)
  • Keen aptitude for large-scale data analysis with a passion for identifying key insights from data
  • Expert working knowledge of various machine learning algorithms such as XGBoost, SVM, etc.

We are looking for candidates with experience in any of the following:
  • Unsecured Loans & SME Loans analytics (cards, installment loans) - risk-based pricing analytics
  • Differential pricing / selection analytics (retail, airlines/travel, etc.)
  • Digital product companies or digital eCommerce, with a product mindset
  • Fraud / Risk from banks, NBFCs / fintech / credit bureaus
  • Online media, with knowledge of media, online ads & sales (agencies) - knowledge of DMP, DFP, Adobe/Omniture tools, Cloud
  • Consumer Durable Loans lending companies (experience in Credit Cards, Personal Loans - optional)
  • Tractor Loans lending companies (experience in Farm)
  • Recovery, Collections analytics
  • Marketing analytics with digital marketing, market mix modelling, advertising technology
Job posted by
Vinodhkumar Panneerselvam

Big Data Engineer

at A Telecom Industry

Agency job
via Multi Recruit
Big Data
Apache Spark
Java
Spring Boot
RESTful
Bengaluru (Bangalore)
6 - 10 yrs
₹16L - ₹18L / yr
  • Expert software implementation and automated testing
  • Promoting development standards, code reviews, mentoring, knowledge sharing
  • Improving our Agile methodology maturity
  • Product and feature design, scrum story writing
  • Build, release, and deployment automation
  • Product support & troubleshooting

 

Who we have in mind: 

  • Demonstrated experience as a Java developer
  • Deep understanding of enterprise/distributed architecture patterns, and the ability to demonstrate their relevant usage
  • Turn high-level project requirements into application-level architecture and collaborate with team members to implement the solution
  • Strong experience and knowledge of the Spring Boot framework and microservice architecture
  • Experience in working with Apache Spark
  • Solid demonstrated object-oriented software development experience with Java, SQL, Maven, relational/NoSQL databases and testing frameworks 
  • Strong working experience with developing RESTful services
  • Should have experience working on Application frameworks such as Spring, Spring Boot, AOP
  • Exposure to tools such as Jira, Bamboo, Git, and Confluence would be an added advantage
  • Excellent grasp of the current technology landscape, trends and emerging technologies
Job posted by
Sukanya J
Data Warehouse (DWH)
Informatica
ETL
Python
DevOps
Kubernetes
Amazon Web Services (AWS)
Chennai
1 - 8 yrs
₹2L - ₹20L / yr
We are a cloud-based company working on security projects.

We are looking for good Python developers / data engineers / DevOps engineers.
Experience: 1-8 years
Work location: Chennai / remote support
Job posted by
sharmila padmanaban
Spark
Apache Kafka
PySpark
Internet of Things (IOT)
Real time media streaming
Remote, Bengaluru (Bangalore)
2 - 6 yrs
₹6L - ₹15L / yr

JD for IoT DE:

 

The role requires experience in Azure core technologies – IoT Hub/ Event Hub, Stream Analytics, IoT Central, Azure Data Lake Storage, Azure Cosmos, Azure Data Factory, Azure SQL Database, Azure HDInsight / Databricks, SQL data warehouse.

 

You Have:

  • Minimum 2 years of software development experience
  • Minimum 2 years of experience in IoT/streaming data pipelines solution development
  • Bachelor's and/or Master’s degree in computer science
  • Strong Consulting skills in data management including data governance, data quality, security, data integration, processing, and provisioning
  • Delivered data management projects with real-time/near real-time data insights delivery on Azure Cloud
  • Translated complex analytical requirements into the technical design including data models, ETLs, and Dashboards / Reports
  • Experience deploying dashboards and self-service analytics solutions on both relational and non-relational databases
  • Experience with different computing paradigms in databases such as In-Memory, Distributed, Massively Parallel Processing
  • Successfully delivered large-scale IoT data management initiatives covering Plan, Design, Build, and Deploy phases, leveraging different delivery methodologies including Agile
  • Experience handling telemetry data with Spark Streaming, Kafka, Flink, Scala, PySpark, and Spark SQL
  • Hands-on experience with containers and Docker
  • Exposure to streaming protocols like MQTT and AMQP
  • Knowledge of OT network protocols like OPC UA, CAN Bus, and similar protocols
  • Strong knowledge of continuous integration, static code analysis, and test-driven development
  • Experience in delivering projects in a highly collaborative delivery model with teams at onsite and offshore
  • Must have excellent analytical and problem-solving skills
  • Delivered change management initiatives focused on driving data platforms adoption across the enterprise
  • Strong verbal and written communications skills are a must, as well as the ability to work effectively across internal and external organizations
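The telemetry-handling skills above can be illustrated with a small plain-Python sketch of a tumbling-window average, the same shape of aggregation that Spark Structured Streaming's `window()` performs over a Kafka stream. The event shape (timestamp, value pairs) is an assumption for illustration.

```python
from collections import defaultdict

def tumbling_window_avg(events, window_secs):
    """Group (timestamp, value) telemetry events into tumbling windows and
    return {window_start: average_value}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for ts, value in events:
        start = (ts // window_secs) * window_secs  # align to window boundary
        sums[start] += value
        counts[start] += 1
    return {start: sums[start] / counts[start] for start in sums}
```

In a real pipeline the same aggregation would be expressed as `df.groupBy(window("ts", "10 seconds")).avg("value")` and run continuously over the stream rather than a batch.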
     

Roles & Responsibilities
 

You Will:

  • Translate functional requirements into technical design
  • Interact with clients and internal stakeholders to understand the data and platform requirements in detail and determine core Azure services needed to fulfill the technical design
  • Design, Develop and Deliver data integration interfaces in ADF and Azure Databricks
  • Design, Develop and Deliver data provisioning interfaces to fulfill consumption needs
  • Deliver data models on the Azure platform, whether on Azure Cosmos, SQL DW / Synapse, or SQL
  • Advise clients on ML Engineering and deploying ML Ops at Scale on AKS
  • Automate core activities to minimize the delivery lead times and improve the overall quality
  • Optimize platform cost by selecting the right platform services and architecting the solution in a cost-effective manner
  • Deploy Azure DevOps and CI/CD processes
  • Deploy logging and monitoring across the different integration points for critical alerts

 

Job posted by
Priyanka U

Data Engineer

at Surplus Hand

Agency job
via SurplusHand
Apache Hadoop
Apache Hive
PySpark
Big Data
Java
Spark
SQL
Apache HBase
Remote, Hyderabad
3 - 5 yrs
₹10L - ₹14L / yr
Tech Skills:
• Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.)
• Good hands-on experience with Spark (Spark with Java / PySpark)
• Hive
• Must be good with SQL (Spark SQL / HiveQL)
• Application design, software development, and automated testing
Environment Experience:
• Experience implementing integrated automated release management using tools/technologies/frameworks like Maven, Git, code/security review tools, Jenkins, automated testing, and JUnit
• Demonstrated experience with Agile or other rapid application development methods
• Cloud development (AWS/Azure/GCP)
• Unix / shell scripting
• Web services, open API development, and REST concepts
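As a self-contained illustration of the Spark SQL / HiveQL requirement, the same GROUP BY aggregation is shown below against an in-memory SQLite table. The table and data are invented; on Spark an identical query would run via `spark.sql()` over a registered temp view.

```python
import sqlite3

# Stand-in for a Spark SQL aggregation, using SQLite so the example is runnable
# without a cluster. The query text would be unchanged in Spark SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (device TEXT, reading REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("a", 1.0), ("a", 3.0), ("b", 5.0)])
rows = conn.execute(
    "SELECT device, AVG(reading) FROM events GROUP BY device ORDER BY device"
).fetchall()
# rows now holds the per-device averages
```

The point of the exercise is fluency with the SQL itself (grouping, aggregates, ordering), which carries over directly between engines.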
Job posted by
Anju John

Internship - Java / Python / AI / ML

at Wise Source

Founded 2014  •  Product  •  20-100 employees  •  Profitable
Artificial Intelligence (AI)
Machine Learning (ML)
Internship
Java
Python
Remote, Guindy
0 - 2 yrs
₹1L - ₹1.5L / yr
Looking for internship candidates.
Designation: Intern / Trainee
Technology: .NET / Java / Python / AI / ML
Duration: 2-3 months
Job location: Online internship
Joining: Immediately
Job type: Internship
Job Description:
- MCA/M.Tech/B.Tech/BE students who need a 2-6 month internship project.
- Should be available to join us immediately.
- Should be flexible to work on any skills/technologies.
- Ready to work long hours.
- Must possess excellent analytical and logical skills.
- Internship guidance is provided by experts.
- An internship certificate will be provided at the end of training.
- The requirement is strictly for an internship and not a permanent job.
- A stipend will be provided based only on performance.
Job posted by
Wise HR