Associate Principal Engineer (Jitterbit Architect)
Posted by Nitika Kalra
9 - 13 yrs
Best in industry
Remote, Mumbai, Delhi, Gurugram, Noida, Chennai, Bengaluru (Bangalore), Hyderabad, Pune, Kolkata
Skills
Jitterbit

šŸ‘‹šŸ¼We're Nagarro.

Ā 

We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (19000+ experts across 33 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That is where you come in!


REQUIREMENTS:

  • Bachelor's/Master's degree or equivalent experience in computer science.
  • Overall 10-12 years of experience, with at least 4 years on the Jitterbit Harmony platform and Jitterbit Cloud.
  • Should have experience technically leading and grooming developers who may be geographically distributed.
  • Knowledge of Change & Incident Management processes (JIRA, etc.)

RESPONSIBILITIES:

  • Responsible for end-to-end implementation of integration use cases using the Jitterbit platform.
  • Coordinate with all stakeholders for successful project execution.
  • Responsible for requirement gathering, integration strategy, design, implementation, etc.
  • Should have strong hands-on experience in designing, building, and deploying integration solutions using the Jitterbit Harmony platform.
  • Should have developed enterprise services using REST-based APIs and SOAP web services, and used different Jitterbit connectors (Salesforce, DB, JMS, File, HTTP/HTTPS, any TMS connector).
  • Should have knowledge of custom Jitterbit plugins and custom connectors.
  • Experience in Jitterbit implementations, including security, logging, error handling, scalability, and clustering.
  • Strong experience in Jitterbit Script, XSLT, and JavaScript.
  • Install, configure, and deploy solutions using Jitterbit.
  • Provide test support for bug fixes during all stages of the test cycle.
  • Provide support for deployment and post go-live.
  • Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, and testing.
  • Understand the requirements, create the necessary documentation, give presentations to clients, obtain the necessary approvals, and create the design doc for the release.
  • Estimate tasks and discuss risks/issues with clients.
  • Work on specific modules independently and test the application; perform code reviews and suggest best practices to the team.
  • Broad knowledge of web standards relating to APIs (OAuth, SSL, CORS, JWT, etc.)
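As a small illustration of one of the standards listed above (JWT), a token is three base64url-encoded segments joined by dots: header.payload.signature. The sketch below hand-builds an illustrative token with only the Python standard library; the token and its claims are invented for this example, and real code must verify the signature with a proper JWT library rather than decode blindly:

```python
import base64
import json

def b64url(obj):
    """Compact-JSON-encode obj, then base64url-encode without '=' padding."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Hand-built, non-production token (claims invented for this sketch).
header = {"alg": "HS256", "typ": "JWT"}
payload = {"sub": "user-123", "role": "architect"}
token = f"{b64url(header)}.{b64url(payload)}.signature-goes-here"

# Decoding the payload back out: restore '=' padding, then base64url-decode.
part = token.split(".")[1]
decoded = json.loads(base64.urlsafe_b64decode(part + "=" * (-len(part) % 4)))
print(decoded)  # {'sub': 'user-123', 'role': 'architect'}
```

The point is only the wire format; signature creation and verification (the part that makes JWT safe) are deliberately left out here.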

About Nagarro Software

Founded: 1996
Size: 5000+
Stage: Profitable
Tech Stack
Java
JavaScript
NodeJS (Node.js)
Fullstack Developer
backend
frontend
Software Testing (QA)
DOTNET
MuleSoft
NOCODE
SAP
Hybris
Enterprise Resource Planning (ERP)
Salesforce
MS SharePoint
JIRA
Service delivery
Microsoft technologies
Emerging technologies
Oracle
Big Data
Artificial Intelligence (AI)
Machine Learning (ML)
Connect with the team
Garima Negi, Ashish Arora, Manohar Venugopal, Kavita Khurana, Jai Kumar, Reetu Chhajjal, Rajesh P, Isha Arora, Pinky Rustagi, Ravinder Kaur, Akanksha Badoni, Sunil Kanderi
Company social profiles: Instagram, LinkedIn, Twitter

Similar jobs

Leading StartUp Focused On Employee Growth
Bengaluru (Bangalore)
4 - 8 yrs
₹25L - ₹45L / yr
Data Analytics
Data Analyst
Tableau
Mixpanel
CleverTap
+2 more
  • 4+ years of experience in data and analytics.
  • Knowledge of Excel, SQL, and writing code in Python.
  • Experience with reporting and business intelligence tools like Tableau and Metabase.
  • Exposure to distributed analytics processing technologies (e.g. Hive, Spark) is desired.
  • Experience with CleverTap, Mixpanel, Amplitude, etc.
  • Excellent communication skills.
  • Background in market research and project management.
  • Attention to detail.
  • Problem-solving aptitude.
Bengaluru (Bangalore)
6 - 15 yrs
₹40L - ₹90L / yr
Data Science
Deep Learning
Data Scientist
Machine Learning (ML)
Artificial Neural Network (ANN)
+9 more

Responsibilities

  • Build out and manage a young data science vertical within the organization

  • Provide technical leadership in the areas of machine learning, analytics, and data sciences

  • Work with the team to create a roadmap that solves the company's requirements for data mining, analytics, and ML, identifying business problems that could be solved using data science and scoping them out end to end.

  • Solve business problems by applying advanced Machine Learning algorithms and complex statistical models on large volumes of data.

  • Develop heuristics, algorithms, and models to deanonymize entities on public blockchains

  • Data Mining - Extend the organization's proprietary dataset by introducing new data collection methods and by identifying new data sources.

  • Keep track of the latest trends in cryptocurrency usage on open-web and dark-web and develop counter-measures to defeat concealment techniques used by criminal actors.

  • Develop in-house algorithms to generate risk scores for blockchain transactions.

  • Work with data engineers to implement the results of your work.

  • Assemble large, complex data sets that meet functional / non-functional business requirements.

  • Build, scale and deploy holistic data science products after successful prototyping.

  • Clearly articulate and present recommendations to business partners, and influence future plans based on insights.

Preferred Experience

  • 8+ years of relevant experience as a Data Scientist or Analyst. A few years of work experience solving NLP or other ML problems is a plus

  • Must have previously managed a team of at least 5 data scientists or analysts, or demonstrate prior experience in scaling a data science function from the ground up

  • Good understanding of Python, bash scripting, and basic cloud platform skills (on GCP or AWS)

  • Excellent communication and analytical skills

What you'll get

  • Work closely with the Founders in helping grow the organization to the next level alongside some of the best and brightest talents around you

  • An excellent culture; we encourage collaboration, growth, and learning amongst the team

  • Competitive salary and equity

  • An autonomous and flexible role where you will be trusted with key tasks.

  • An opportunity to have a real impact and be part of a company with purpose.

Mumbai, Navi Mumbai
6 - 14 yrs
₹16L - ₹37L / yr
Python
PySpark
Data engineering
Big Data
Hadoop
+3 more

Role: Principal Software Engineer


We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. This role involves managing, processing, and analyzing large amounts of raw information in scalable databases, as well as developing unique data structures and writing algorithms for an entirely new set of products. The candidate should have critical-thinking and problem-solving skills, must be experienced in software development with advanced algorithms, and must be able to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus. The candidate should also have some exposure to cloud environments, continuous integration, and agile scrum processes.



Responsibilities:


• Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule

• Software development that creates data-driven intelligence in products that deal with Big Data backends

• Exploratory analysis of the data to come up with efficient data structures and algorithms for given requirements

• The system may or may not involve machine learning models and pipelines but will require advanced algorithm development

• Managing data in large-scale data stores (such as NoSQL DBs, time series DBs, geospatial DBs, etc.)

• Creating metrics and evaluating algorithms for better accuracy and recall

• Ensuring efficient access and usage of data through indexing, clustering, etc.

• Collaborate with engineering and product development teams.


Requirements:


• Master's or Bachelor's degree in Engineering in one of these domains - Computer Science, Information Technology, Information Systems, or a related field from a top-tier school

• OR Master's degree or higher in Statistics or Mathematics, with a hands-on background in software development.

• 8 to 10 years of experience with product development, having done algorithmic work

• 5+ years of experience working with large data sets or doing large-scale quantitative analysis

• Understanding of SaaS-based products and services.

• Strong algorithmic problem-solving skills

• Able to mentor and manage a team and take responsibility for team deadlines.


Skill set required:


• In-depth knowledge of the Python programming language

• Understanding of software architecture and software design

• Must have fully managed a project with a team

• Experience with Agile project management practices

• Experience with data processing, analytics, and visualization tools in Python (such as pandas, matplotlib, SciPy, etc.)

• Strong understanding of SQL and querying NoSQL databases (e.g. Mongo, Cassandra, Redis)
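As a small illustration of the SQL fluency and indexing points above, here is a join-plus-aggregate query over an in-memory SQLite database; the table names and data are invented for this sketch, and a production system would of course use its own schema and engine:

```python
import sqlite3

# In-memory database with two illustrative tables (schema invented for this sketch).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id),
                         amount REAL);
    CREATE INDEX idx_orders_user ON orders(user_id);  -- supports efficient joins
""")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "asha"), (2, "ravi")])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 120.0), (2, 1, 80.0), (3, 2, 50.0)])

# A join + aggregate query: total order value per user, highest first.
rows = conn.execute("""
    SELECT u.name, SUM(o.amount) AS total
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('asha', 200.0), ('ravi', 50.0)]
```

The same join/group/order shape carries over to larger engines; only the connection setup and dialect details change.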

Inviz Ai Solutions Private Limited
Shridhar Nayak
Posted by Shridhar Nayak
Bengaluru (Bangalore)
4 - 8 yrs
Best in industry
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more

InViz is a Bangalore-based startup helping enterprises simplify the search and discovery experiences for both their end customers and their internal users. We use state-of-the-art technologies in Computer Vision, Natural Language Processing, Text Mining, and other ML techniques to extract information/concepts from data in different formats (text, images, videos) and make them easily discoverable through simple, human-friendly touchpoints.

TSDE - Data

Data Engineer:

  • Should have a total of 3-6 years of experience in Data Engineering.
  • Should have experience coding data pipelines on GCP.
  • Prior experience with Hadoop systems is ideal, as the candidate may not have full GCP experience.
  • Strong in programming languages like Scala, Python, Java.
  • Good understanding of various data storage formats and their advantages.
  • Should have exposure to GCP tools to develop end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integrating API-based data sources).
  • Should have a business mindset to understand data and how it will be used for BI and analytics purposes.
  • Data Engineer certification preferred

Experience working with GCP tools like:

Store: CloudSQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore

Ingest: Stackdriver, Pub/Sub, App Engine, Kubernetes Engine, Kafka, Dataprep, microservices

Schedule: Cloud Composer

Processing: Cloud Dataproc, Cloud Dataflow, Cloud Dataprep

CI/CD: Bitbucket + Jenkins / GitLab

Atlassian Suite
TreQ
at TreQ
Nidhi Tiwari
Posted by Nidhi Tiwari
Mumbai
2 - 5 yrs
₹7L - ₹12L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Web Services (AWS)
PostgreSQL
+1 more

Responsibilities :


  • Be involved in the planning, design, development, and maintenance of large-scale data repositories, pipelines, analytical solutions, and knowledge management strategy
  • Build and maintain optimal data pipeline architecture to ensure scalability, and connect operational systems data to analytics and business intelligence (BI) systems
  • Build data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
  • Report and obtain insights from large data chunks on import/export, and communicate relevant pointers to help decision-making
  • Prepare, analyze, and present reports to management for further developmental activities
  • Anticipate, identify, and solve issues concerning data management to improve data quality


Requirements :


  • Ability to build and maintain ETL pipelines
  • Technical business analysis experience and hands-on experience developing functional specs
  • Good understanding of Data Engineering principles including data modeling methodologies
  • Sound understanding of PostgreSQL
  • Strong analytical and interpersonal skills as well as reporting capabilities
Hy-Vee
Bengaluru (Bangalore)
5 - 10 yrs
₹15L - ₹33L / yr
ETL
Informatica
Data Warehouse (DWH)
Python
Git
+4 more

Technical & Business Expertise:

- Hands-on integration experience in SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced level of database development experience in SQL Server
- Proven advanced level of understanding of Data Lake
- Proven intermediate level of writing Python or a similar programming language
- Intermediate understanding of cloud platforms (GCP)
- Intermediate understanding of Data Warehousing
- Advanced understanding of source control (GitHub)

Technovert
at Technovert
12 recruiters
Dushyant Waghmare
Posted by Dushyant Waghmare
Hyderabad
5 - 8 yrs
₹12.5L - ₹24L / yr
ETL
Informatica
Data Warehouse (DWH)

Role: ODI Developer

Location: Hyderabad (Initially remote)

Experience: 5-8 Years

Technovert is not your typical IT services firm. To our credit, we have two successful products generating $2M+ in licensing/SaaS revenues, which is rare in the industry.

We are obsessed with our love for technology and the infinite possibilities it can create for making this world a better place. Our clients find us at our best when we are challenged with their toughest problems, and we love chasing them. It thrills us and motivates us to deliver more. Our global delivery model has earned the trust and reputation of being a partner of choice.

We have a strong heritage built on great people who put customers first and deliver exceptional results with no surprises - every time. We partner with you to understand the interconnection of user experience, business goals, and information technology. It is the optimal fusing of these three drivers that delivers.


Must have:

  • Experience with DWH implementation, including developing ETL processes - ETL control tables, error logging, auditing, data quality, etc.
  • Responsible for creation of ELT maps, migrations into different environments, maintenance and monitoring of the infrastructure, working with DBAs, as well as creation of new reports to assist executive and managerial levels in analyzing business needs to target customers.
  • Should be able to implement reusability, parameterization, workflow design, etc.
  • Expertise in the Oracle ODI toolset and OAC; knowledge of the ODI master and work repositories, data modeling, and ETL design.
  • Used ODI Topology Manager to create connections to various technologies such as Oracle, SQL Server, flat files, XML, etc.
  • Use of ODI mappings, error handling, automation using ODI, load plans, and migration of objects.
  • Ability to design ETL unit test cases and debug ETL mappings; expertise in developing load plans and scheduling jobs.
  • Integrate ODI with multiple sources/targets.


Nice to have:

  • Exposure to Oracle Cloud Infrastructure (OCI) is preferable.
  • Knowledge of Oracle Analytics Cloud to explore data through visualizations and to load and model data.
  • Hands-on experience with ODI 12c would be an added advantage.


Qualification:

  • Overall 3+ years of experience in Oracle Data Integrator (ODI) and Oracle Data Integrator Cloud Service (ODICS).
  • Experience in designing and implementing the E-LT architecture required to build a data warehouse, including source-to-staging area, staging-to-target area, data transformations, and E-LT process flows.
  • Must be well versed and hands-on in using and customizing Knowledge Modules (KMs), with experience in performance tuning of mappings.
  • Must be self-starting, have strong attention to detail and accuracy, and be able to fill multiple roles within the Oracle environment.
  • Should be good with Oracle/SQL and should have a good understanding of DDL deployments.

Ganit Business Solutions
at Ganit Business Solutions
3 recruiters
Vijitha VS
Posted by Vijitha VS
Remote only
4 - 7 yrs
₹10L - ₹30L / yr
Scala
ETL
Informatica
Data Warehouse (DWH)
Big Data
+4 more

Job Description:

We are looking for a Big Data Engineer who has worked across the entire ETL stack: someone who has ingested data in batch and live-stream formats, transformed large volumes of daily data, built a data warehouse to store the transformed data, and integrated different visualization dashboards and applications with the data stores. The primary focus will be on choosing optimal solutions to use for these purposes, then maintaining, implementing, and monitoring them.

Responsibilities:

  • Develop, test, and implement data solutions based on functional / non-functional business requirements.
  • You would be required to code in Scala and PySpark daily, on Cloud as well as on-prem infrastructure
  • Build data models to store the data in the most optimized manner
  • Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Implement the ETL process and optimal data pipeline architecture
  • Monitor performance and advise on any necessary infrastructure changes.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Proactively identify potential production issues and recommend and implement solutions
  • Must be able to write quality code and build secure, highly available systems.
  • Create design documents that describe the functionality, capacity, architecture, and process.
  • Review peer code and pipelines before deploying to production, checking for optimization issues and code standards

Skill Sets:

  • Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and 'big data' technologies.
  • Proficient understanding of distributed computing principles
  • Experience working with batch-processing / real-time systems using various open-source technologies like NoSQL, Spark, Pig, Hive, and Apache Airflow.
  • Implemented complex projects dealing with considerable data sizes (PB).
  • Optimization techniques (performance, scalability, monitoring, etc.)
  • Experience with integration of data from multiple data sources
  • Experience with NoSQL databases such as HBase, Cassandra, MongoDB, etc.
  • Knowledge of various ETL techniques and frameworks, such as Flume
  • Experience with various messaging systems, such as Kafka or RabbitMQ
  • Creation of DAGs for data engineering
  • Expert at Python/Scala programming, especially for data engineering / ETL purposes
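As a toy sketch of the batch extract-transform-load pattern this role centers on, here is the shape of a pipeline in pure Python; the CSV input and its schema are invented for illustration, and a real pipeline would run the same stages through Spark jobs or Airflow-orchestrated tasks rather than in-process generators:

```python
import csv
import io
from collections import defaultdict

# Extract: a tiny CSV standing in for a raw batch file (invented data).
RAW = """user,event,amount
asha,purchase,120.5
ravi,refund,-30.0
asha,purchase,79.5
"""

def extract(text):
    """Yield raw rows from the CSV source."""
    yield from csv.DictReader(io.StringIO(text))

def transform(rows):
    """Parse amounts and drop non-purchase events."""
    for row in rows:
        if row["event"] == "purchase":
            yield row["user"], float(row["amount"])

def load(pairs):
    """Aggregate into a per-user total, our stand-in 'warehouse' table."""
    totals = defaultdict(float)
    for user, amount in pairs:
        totals[user] += amount
    return dict(totals)

warehouse = load(transform(extract(RAW)))
print(warehouse)  # {'asha': 200.0}
```

Because each stage is a generator feeding the next, rows stream through without materializing the whole batch in memory - the same property distributed engines provide at cluster scale.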

Fragma Data Systems
at Fragma Data Systems
8 recruiters
Evelyn Charles
Posted by Evelyn Charles
Remote only
1.5 - 5 yrs
₹8L - ₹15L / yr
PySpark
SQL
• Responsible for developing and maintaining applications with PySpark
• Contribute to the overall design and architecture of the application developed and deployed.
• Performance tuning with respect to executor sizing and other environmental parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues.
• Implement projects based on functional specifications.

Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience in SQL DBs; able to write queries of fair complexity.
• Should have excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business rules processing and data extraction from a Data Lake into data streams for business consumption.
• Good customer communication.
• Good analytical skills
FarmGuide
at FarmGuide
1 recruiter
Anupam Arya
Posted by Anupam Arya
NCR (Delhi | Gurgaon | Noida)
0 - 8 yrs
₹7L - ₹14L / yr
Computer Security
Image processing
OpenCV
Python
Rational ClearCase
+8 more
FarmGuide is a data-driven tech startup aiming to digitize the periodic processes in place and bring information symmetry to the agriculture supply chain through transparent, dynamic, and interactive software solutions. We at FarmGuide (https://angel.co/farmguide) help the Government in relevant and efficient policy making by ensuring a seamless flow of information between stakeholders.

Job Description:

We are looking for individuals who want to help us design cutting-edge scalable products to meet our rapidly growing business. We are building out the data science team and looking to hire across levels.
- Solving complex problems in the agri-tech sector, which are long-standing open problems at the national level.
- Applying computer vision techniques to satellite imagery to deduce artefacts of interest.
- Applying various machine learning techniques to digitize the existing physical corpus of knowledge in the sector.

Key Responsibilities:
- Develop computer vision algorithms for production use on satellite and aerial imagery
- Implement models and data pipelines to analyse terabytes of data.
- Deploy built models in a production environment.
- Develop tools to assess algorithm accuracy
- Implement algorithms at scale in the commercial cloud

Skills Required:
- B.Tech/M.Tech in CS or other related fields such as EE or MCA from IIT/NIT/BITS, but not compulsory.
- Demonstrable interest in Machine Learning and Computer Vision, such as coursework, open-source contributions, etc.
- Experience with digital image processing techniques
- Familiarity/experience with geospatial, planetary, or astronomical datasets is valuable
- Experience in writing algorithms to manipulate geospatial data
- Hands-on knowledge of GDAL or open-source GIS tools is a plus
- Familiarity with cloud systems (AWS/Google Cloud) and cloud infrastructure is a plus
- Experience with high-performance or large-scale computing infrastructure might be helpful
- Coding ability in R or Python.
- Self-directed team player who thrives in a continually changing environment

What is on offer:
- High-impact role in a young start-up with colleagues from IITs and other Tier 1 colleges
- Chance to work on the cutting edge of ML (yes, we do train neural nets on GPUs)
- Lots of freedom in terms of the work you do and how you do it
- Flexible timings
- Best start-up salary in the industry with additional tax benefits