
PySpark Data Engineer:
- Hands-on expertise in designing, building, and maintaining Apache Spark pipelines in production environments.
- Proven experience building and scaling data ingestion frameworks that integrate data from multiple source systems, with a focus on reliability, reusability, and scalability.
- Deep understanding of Spark architecture (driver/executors, DAG, partitioning, shuffles, caching, cluster resource management) and experience operating pipelines at scale, including data transformations on datasets ~500 GB+.
- Strong understanding of Oracle SQL and HDFS, including handling file formats and applying appropriate data cleansing, normalization, and formatting to produce curated output datasets.
- Ability to write Python, PySpark, and shell scripts to process, transform, and automate data workflows. The candidate should be comfortable writing application programs and automating manual data-processing steps in Python.
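The automation requirement above can be illustrated with a minimal, self-contained sketch (plain Python, no Spark dependency; the field names and rules are hypothetical):

```python
# Minimal sketch of automating a manual cleansing step: normalize raw
# records (trim whitespace, standardize case, coerce types) before
# handing them to a downstream PySpark job. Names are illustrative.

def clean_record(raw: dict) -> dict:
    """Trim, normalize, and type-coerce one raw source record."""
    return {
        "customer_id": int(raw["customer_id"]),
        "name": raw["name"].strip().title(),
        "country": raw["country"].strip().upper(),
        "amount": round(float(raw["amount"]), 2),
    }

def clean_batch(rows):
    """Apply cleansing to a batch, skipping rows that fail coercion."""
    cleaned = []
    for row in rows:
        try:
            cleaned.append(clean_record(row))
        except (KeyError, ValueError):
            continue  # a real pipeline would route these to a reject sink
    return cleaned
```

In production the same logic would typically live in a PySpark UDF or DataFrame expression; the plain-Python form just keeps the cleansing rules visible.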

Required Skills and Qualifications :
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Modeler or in a similar role at an asset manager or other financial firm.
- Strong understanding of business concepts relevant to buy-side financial firms. Understanding of Private Markets (Private Credit, Private Equity, Real Estate, Alternatives) is required.
- Strong understanding of database design principles and data modeling techniques (e.g., ER modeling, dimensional modeling).
- Knowledge of SQL and experience with relational databases (e.g., Oracle, SQL Server, MySQL).
- Familiarity with NoSQL databases is a plus.
- Excellent analytical and problem-solving skills.
- Strong communication skills and the ability to work collaboratively.
Preferred Qualifications:
- Experience in data warehousing and business intelligence.
- Knowledge of data governance practices.
- Certification in data modeling or related fields.
Key Responsibilities :
- Design and develop conceptual, logical, and physical data models based on business requirements.
- Collaborate with stakeholders in finance, operations, risk, legal, compliance, and the front office to gather and analyze data requirements.
- Ensure data models adhere to best practices for data integrity, performance, and security.
- Create and maintain documentation for data models, including data dictionaries and metadata.
- Conduct data profiling and analysis to identify data quality issues.
- Conduct detailed meetings and discussions with business to translate broad business functionality requirements into data concepts, data models and data products.
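The dimensional-modeling technique named above can be sketched compactly. Below is a toy star schema for a hypothetical buy-side use case (all table, key, and field names are illustrative; in practice this would be DDL in Oracle or SQL Server):

```python
from dataclasses import dataclass

# A toy star schema: one fact table keyed to two dimensions. The
# dataclass form makes the grain and surrogate keys explicit.

@dataclass(frozen=True)
class DimDate:
    date_key: int        # surrogate key, e.g. 20240131
    calendar_date: str
    quarter: str

@dataclass(frozen=True)
class DimFund:
    fund_key: int        # surrogate key
    fund_name: str
    strategy: str        # e.g. "Private Credit", "Real Estate"

@dataclass(frozen=True)
class FactPosition:
    # grain: one row per fund per date
    date_key: int
    fund_key: int
    market_value: float

def market_value_by_strategy(facts, funds):
    """Roll the fact table up to strategy level via the fund dimension."""
    strategy_by_key = {f.fund_key: f.strategy for f in funds}
    totals = {}
    for fact in facts:
        strategy = strategy_by_key[fact.fund_key]
        totals[strategy] = totals.get(strategy, 0.0) + fact.market_value
    return totals
```

The roll-up function shows why the dimensional split matters: aggregations join the fact to a dimension attribute rather than duplicating it on every row.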
Candidate should have 6-9 years of experience in IoT embedded systems. He/she should be passionate, tech-savvy, and academically sound, with an interest in embedded devices and technologies.
Experience:
• Exposure to microcontroller/microprocessor architectures/families, e.g. ARM Cortex, Microchip, Xtensa ESP32, TI CC32XX, STM32
• Knowledge of bare metal and any RTOS (FreeRTOS, µC/OS, embOS, VxWorks, QNX)
• Knowledge of microcontroller peripherals and low-level drivers, e.g. ADC, DAC, I2C, SPI, UART, CAN, RS-485, DMA, Ethernet, display
• Knowledge of networking concepts such as OSI layers, embedded TCP/IP stacks, and common IP protocols
• Knowledge of RF protocols: Wi-Fi, Bluetooth/BLE, cellular IoT
• Knowledge of IoT communication protocols: MQTT, CoAP, AMQP
• Knowledge of build toolchains and frameworks such as IAR, GCC, Keil, MPLAB
• DFMA and DFMEA, Design release process
• Coding standards, guidelines and compliance tools
• Version control and repositories using git tools
• Software quality assurance and automated testing tools
Experience / Skills:
• Embedded software design cycle
• Documenting software design (flowcharts, state diagrams, logic design, analysis, implementation, debugging, testing, etc.)
• Good hands-on programming in Embedded C, C++.
• Programming in scripting languages such as batch, shell, python is a plus.
• Experience with AWS/Google Cloud for device connectivity; exposure to IoT cloud services, e.g. AWS IoT
• Software and Hardware integration testing and troubleshooting
• Protocol debugging using protocol analyzer.
• Understanding of Schematic/ Hardware design around microcontrollers like ST, TI, Atmel, Microchip, ARM core
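Several items above (low-level drivers, UART/CAN framing, protocol debugging) revolve around fixed binary layouts. Here is a minimal sketch of packing and validating a hypothetical 8-byte sensor frame, written in Python's `struct` for readability rather than embedded C (the frame layout and checksum rule are invented for illustration):

```python
import struct

# Hypothetical frame: little-endian uint16 sensor id, int32 reading
# (e.g. millidegrees), uint8 sequence counter, then a uint8 checksum
# chosen so that all 8 bytes sum to zero modulo 256.
FRAME_FMT = "<HiB"  # id, reading, seq; checksum is appended separately

def pack_frame(sensor_id: int, reading: int, seq: int) -> bytes:
    body = struct.pack(FRAME_FMT, sensor_id, reading, seq)
    checksum = (-sum(body)) & 0xFF
    return body + bytes([checksum])

def unpack_frame(frame: bytes):
    """Return (sensor_id, reading, seq); raise ValueError on a bad checksum."""
    if (sum(frame) & 0xFF) != 0:
        raise ValueError("checksum mismatch")
    return struct.unpack(FRAME_FMT, frame[:-1])
```

The same layout written in C would be a packed struct plus the additive checksum; a protocol analyzer capture of such a link is decoded byte-for-byte the same way.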

Full Stack Developer – India/Ahmedabad
- Hybrid app development in Flutter, with Magento 2 hybrid ecommerce
- Proficiency with fundamental front end languages such as HTML, CSS and JavaScript.
- Familiarity with JavaScript frameworks such as AngularJS, React, Ionic, and Ember.
- Proficiency with the server-side language Java (Spring Framework)
- Familiarity with database technologies such as MySQL, Oracle, MongoDB, Elasticsearch, and other SQL/NoSQL stores.
- Proficiency in English
- Experience: 3 to 5 years
- Education: a software engineering degree or equivalent self-education; speed, knowledge, and competencies are the key factors.
- Working days based on the UAE calendar (minimum 20 days per month); working hours 9 AM – 6:30 PM, Ahmedabad time zone
Technologies & Languages
- Azure
- Databricks
- SQL Server
- ADF
- Snowflake
- Data Cleaning
- ETL
- Azure Devops
- Intermediate Python/Pyspark
- Intermediate SQL
- Beginner's knowledge of (or willingness to learn) Spotfire
- Data Ingestion
- Familiarity with CI/CD or Agile
Must have:
- Azure – VM, Data Lake, Databricks, Data Factory, Azure DevOps
- Python/Spark (PySpark)
- SQL
Good to have:
- Docker
- Kubernetes
- Scala
He/she should have a good understanding of:
- How to build pipelines – ETL and ingestion
- Data Warehousing
- Monitoring
Responsibilities:
Must be able to write quality code and build secure, highly available systems.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements with guidance: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Monitor performance and advise on any necessary infrastructure changes.
Define data retention policies.
Implement the ETL process and an optimal data pipeline architecture.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Create design documents that describe the functionality, capacity, architecture, and process.
Develop, test, and implement data solutions based on finalized design documents.
Work with data and analytics experts to strive for greater functionality in our data systems.
Proactively identify potential production issues and recommend and implement solutions.
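The ETL responsibilities above reduce to three small stages wired together. A minimal sketch (the source rows, business rule, and sink are all illustrative; real pipelines would read from ADF/Databricks sources and write to a warehouse table):

```python
# Toy extract -> transform -> load pipeline, kept in plain Python so
# the shape of the stages stays visible.

def extract(source_rows):
    """Pull raw rows from the (here: in-memory) source."""
    return list(source_rows)

def transform(rows):
    """Apply business rules: drop incomplete rows, derive a metric."""
    out = []
    for r in rows:
        if r.get("revenue") is None or r.get("cost") is None:
            continue
        out.append({**r, "margin": r["revenue"] - r["cost"]})
    return out

def load(rows, sink: list):
    """Append transformed rows to the sink (stand-in for a table write)."""
    sink.extend(rows)
    return len(rows)

def run_pipeline(source_rows, sink):
    return load(transform(extract(source_rows)), sink)
```

Keeping each stage as a pure function is what makes the pipeline testable and its steps independently replaceable, which is the point of the "optimal pipeline architecture" item above.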
- Design and implementation of the frontend applications and Mobile apps
- “Pixel-perfect” implementation of our approved user interface
- Ability to understand business requirements and translate them into technical requirements
- Break features into simpler granular tasks, estimate effort required, and identify dependencies
- Work with analysts and back-end developers to deliver a seamless user experience
- Write clean, efficient, maintainable code with good test coverage
- Build reusable code and libraries for future use and optimize the application for maximum speed and scalability
- Test and Debug as required
Expected Qualifications and Key Skills
- BSc or B.E in Computer Science / Engineering from a tier-1 college in India or abroad
- Proficiency in JavaScript, including DOM manipulation and the JavaScript object model
- Strong understanding of UI/UX, responsive design, cross-browser compatibility, AJAX, and general web standards
- Thorough understanding of React.js and its core principles
- Experience with popular React.js workflows (such as Flux or Redux)
- Familiarity with newer ECMAScript specifications and TypeScript
- Experience with data structure libraries (e.g., Immutable.js)
- Knowledge of isomorphic React is a plus
- Familiarity with RESTful APIs
- Knowledge of modern authorization mechanisms, such as JSON Web Token
- Familiarity with modern front-end build pipelines and tools
- Experience with common front-end development tools such as Babel, Webpack, NPM, etc.
- Proficiency with Git / Version control
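The JSON Web Token mechanism mentioned above is just three base64url segments: header, payload, and an HMAC signature. A minimal HS256 sketch using only the standard library, shown in Python for brevity (use a vetted library such as PyJWT in production; this omits `exp`/`aud` claim checks):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_hs256(payload: dict, secret: bytes) -> str:
    """Build header.payload.signature for an HS256 JWT."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_hs256(token: str, secret: bytes) -> dict:
    """Recompute the signature; return the payload only if it matches."""
    header, body, sig = token.encode().split(b".")
    expected = _b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = body + b"=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

The front end never verifies the signature itself; it stores the token and sends it in an `Authorization: Bearer` header, while verification stays server-side.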
Location: Remote – anywhere in India; a permanent work-from-home option is available
Required Skills:
• Strong coding experience in Python and Flask Web Framework.
• Information retrieval – Web Scraping.
• Experience with NoSQL data storage like MongoDB.
• Good knowledge of asynchronous task queues/schedulers such as Celery
• Experience working with large scale databases and storage.
• Knowledge of Javascript with Node.js is a plus.
• Knowledge of various front-end technologies and how websites are built.
• Sound understanding of asynchronous programming in Python, e.g. asyncio
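The asyncio requirement above boils down to fanning out concurrent I/O-bound work. A minimal sketch with simulated fetchers (the names and delays are invented; `asyncio.sleep` stands in for real network calls such as scraping requests):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    """Simulated I/O-bound call; sleep stands in for network latency."""
    await asyncio.sleep(delay)
    return f"{name}:done"

async def fetch_all():
    # gather runs the coroutines concurrently, so total wall time
    # approaches the slowest call rather than the sum of all three,
    # and results come back in argument order.
    return await asyncio.gather(
        fetch("users", 0.02),
        fetch("orders", 0.01),
        fetch("prices", 0.03),
    )

results = asyncio.run(fetch_all())
```

The same fan-out pattern applies whether the awaited work is an HTTP request, a database query, or polling a Celery task result.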
Experience: 5+ years
Skills Required:
- Azure administration; configuration and deployment of Windows/Linux VM/container-based infrastructure
- Scripting/programming in Python, JavaScript/TypeScript, C; scripting with PowerShell, Azure CLI, and shell scripts
- Identity and access management and the RBAC model
- Virtual networking, storage, and compute resources
- Azure database technologies; monitoring and analytics tools in Azure
- Azure DevOps-based CI/CD build pipelines integrated with GitHub – Java and Node.js
- Test automation and other CI/CD tools
- Azure infrastructure using ARM templates and Terraform
Position profile
Senior Software Engineer
Responsibilities and duties
Skill Set
Primary
- 4-7 years’ experience developing scalable Mobile, Web and Cloud based applications.
- 3-4 years’ experience with hybrid mobile applications using IonicJS.
- 3-4 years’ experience with backend frameworks: Node.js, Express.js, Mongoose.js.
- 3-4 years’ experience with JavaScript Frameworks such as Angular, CSS and HTML5.
- 1-2 years’ experience with Azure Cloud, Azure PaaS (Functions, Service Bus, etc.).
- 1-2 years’ experience with SQL Server and MongoDB database.
- Experience in developing applications involving data extraction, transformation, and visualization using tools like Power BI.
- Very strong experience orchestrating multiple asynchronous API calls via callbacks, promises, and async/await.
- Hands-on experience with TDD, using libraries such as Mocha/Jasmine.
- Define and implement application & infrastructure migration methodologies and techniques to migrate workloads into Azure.
- Knowledge of CI/CD, Jenkins pipelines would be a plus.
- Knowledge of AWS Cloud would be a plus.
- Experience in developing native iOS and Android applications would be a plus.
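The TDD item above translates directly across stacks. A minimal red-green example, shown here with Python's `unittest` in place of Mocha/Jasmine (the cart-total function and its tax rule are hypothetical):

```python
import unittest

def cart_total(items, tax_rate=0.0):
    """Sum (qty, price) line items and apply a flat tax rate."""
    subtotal = sum(qty * price for qty, price in items)
    return round(subtotal * (1 + tax_rate), 2)

class CartTotalTest(unittest.TestCase):
    # In TDD these tests are written first and fail ("red") until
    # cart_total is implemented to make them pass ("green").
    def test_subtotal_with_tax(self):
        self.assertEqual(cart_total([(2, 10.0), (1, 5.0)], tax_rate=0.1), 27.5)

    def test_empty_cart(self):
        self.assertEqual(cart_total([]), 0.0)
```

The structure (a describe-style test class with one assertion-focused method per behavior) maps one-to-one onto a Mocha `describe`/`it` block.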
Skill: Python, Docker or Ansible, AWS
➢ Experience building a multi-region, highly available, auto-scaling infrastructure that optimizes performance and cost; plan for future infrastructure as well as maintain and optimize existing infrastructure.
➢ Conceptualize, architect and build automated deployment pipelines in a CI/CD environment like
Jenkins.
➢ Conceptualize, architect, and build a containerized infrastructure using Docker, Mesosphere, or similar platforms.
➢ Work with developers to institute systems, policies, and workflows which allow for rollback of deployments.
➢ Triage releases of applications to the production environment on a daily basis.
➢ Interface with developers and triage SQL queries that need to be executed in production environments.
➢ Maintain 24/7 on-call rotation to respond and support troubleshooting of issues in production.
➢ Assist developers and on-calls for other teams with post-mortems, follow-up, and review of issues affecting production availability.
➢ Establishing and enforcing systems monitoring tools and standards
➢ Establishing and enforcing Risk Assessment policies and standards
➢ Establishing and enforcing Escalation policies and standards
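The rollback workflow above can be modeled as a tiny release registry that always retains the last known-good version, so reverting is a single unambiguous step (class and version names are hypothetical):

```python
# Minimal model of a deploy/rollback workflow: each deploy is recorded,
# so rolling back means dropping the newest release and restoring the
# previous one.

class ReleaseRegistry:
    def __init__(self, initial: str):
        self.current = initial
        self.history = [initial]

    def deploy(self, version: str):
        """Record and activate a new release."""
        self.history.append(version)
        self.current = version

    def rollback(self) -> str:
        """Revert to the previous release; no-op if none exists."""
        if len(self.history) > 1:
            self.history.pop()          # discard the bad release
            self.current = self.history[-1]
        return self.current
```

In a real CI/CD setup the "registry" is typically the deployment tool's release history (e.g. tagged images plus pipeline metadata); the point is that rollback should never require rebuilding an artifact.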
