We're Nagarro.
We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (19000+ experts across 33 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That is where you come in!
REQUIREMENTS:
- Bachelor's/Master's degree or equivalent experience in computer science
- Overall 10-12 years of experience, with at least 4 years of experience with the Jitterbit Harmony platform and Jitterbit Cloud.
- Should have experience technically leading and grooming developers who may be geographically distributed.
- Knowledge of Change & Incident Management process (JIRA etc.)
RESPONSIBILITIES:
- Responsible for end-to-end implementation of integration use cases using the Jitterbit platform.
- Coordinate with all the stakeholders for successful project execution.
- Responsible for requirement gathering, integration strategy, design, implementation, etc.
- Should have strong hands-on experience in designing, building, and deploying integration solutions using the Jitterbit Harmony platform.
- Should have developed enterprise services using REST-based APIs, SOAP web services, and different Jitterbit connectors (Salesforce, DB, JMS, File, HTTP/HTTPS, any TMS connector).
- Should have knowledge of Custom Jitterbit Plugins and Custom Connectors.
- Experience in Jitterbit implementations including security, logging, error handling, scalability and clustering.
- Strong experience in Jitterbit Script, XSLT and JavaScript.
- Install, configure, and deploy solutions using Jitterbit.
- Provide test support for bug fixes during all stages of test cycle.
- Provide support for deployment and post go-live.
- Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, and testing.
- Understand the requirements, create the necessary documentation, give presentations to clients, obtain the necessary approvals, and create the design document for the release.
- Estimate tasks and discuss risks/issues with the clients.
- Work on specific modules independently and test the application; perform code reviews and suggest best practices to the team.
- Broad knowledge of web standards relating to APIs (OAuth, SSL, CORS, JWT, etc.)
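Of the API standards listed above, JWT is easy to illustrate concretely. Below is a minimal, stdlib-only Python sketch of how a JWT's base64url-encoded payload is structured and decoded; the token and claim values are invented for the example, and real services must verify the signature before trusting any claim:

```python
import base64
import json

def decode_jwt_unverified(token: str) -> dict:
    """Decode a JWT's payload without verifying its signature.

    Illustration only: production code must verify the signature
    (with a maintained JWT library) before trusting any claim.
    """
    _header, payload_b64, _signature = token.split(".")
    # JWT segments are base64url without padding; restore it.
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def _b64url(obj: dict) -> str:
    # Helper to build a toy token so the example is self-contained.
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

token = ".".join([_b64url({"alg": "none"}), _b64url({"sub": "user-1"}), ""])
print(decode_jwt_unverified(token))  # {'sub': 'user-1'}
```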
- Knowledge of Excel, SQL, and writing code in Python.
- Experience with reporting and business intelligence tools like Tableau and Metabase.
- Exposure to distributed analytics processing technologies (e.g. Hive, Spark) is desired.
- Experience with CleverTap, Mixpanel, Amplitude, etc.
- Excellent communication skills.
- Background in market research and project management.
- Attention to detail.
- Problem-solving aptitude.
Responsibilities
- Build out and manage a young data science vertical within the organization.
- Provide technical leadership in machine learning, analytics, and data science.
- Work with the team to create a roadmap for the company's requirements: identify business problems that could be solved using data science, scope them end to end, and address them through data mining, analytics, and ML.
- Solve business problems by applying advanced machine learning algorithms and complex statistical models to large volumes of data.
- Develop heuristics, algorithms, and models to deanonymize entities on public blockchains.
- Data mining: extend the organization's proprietary dataset by introducing new data collection methods and identifying new data sources.
- Keep track of the latest trends in cryptocurrency usage on the open web and dark web, and develop countermeasures to defeat concealment techniques used by criminal actors.
- Develop in-house algorithms to generate risk scores for blockchain transactions.
- Work with data engineers to implement the results of your work.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Build, scale, and deploy holistic data science products after successful prototyping.
- Clearly articulate and present recommendations to business partners, and influence future plans based on insights.
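As a rough illustration of the transaction risk-scoring responsibility above, here is a toy heuristic scorer in Python. The feature flags and weights are entirely hypothetical, invented for this sketch, and bear no relation to any organization's actual model:

```python
# Hypothetical risk flags and weights; a real scorer would be
# derived from labeled data, not hand-picked constants.
RISK_WEIGHTS = {
    "mixer_interaction": 0.5,  # counterparty is a known mixing service
    "darknet_market": 0.4,     # address previously seen on a darknet market
    "rapid_peeling": 0.2,      # peel-chain style fund movement
}

def risk_score(flags: set[str]) -> float:
    """Combine binary risk flags into a score clamped to [0, 1]."""
    score = sum(RISK_WEIGHTS.get(flag, 0.0) for flag in flags)
    return min(score, 1.0)

print(risk_score({"mixer_interaction"}))  # 0.5
print(risk_score({"mixer_interaction", "darknet_market", "rapid_peeling"}))  # 1.0
```

A production system would typically replace the hand-set weights with a trained model and calibrate the output against known illicit activity.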
Preferred Experience
- 8+ years of relevant experience as a data scientist or analyst. A few years of work experience solving NLP or other ML problems is a plus.
- Must have previously managed a team of at least 5 data scientists or analysts, or demonstrate prior experience scaling a data science function from the ground up.
- Good understanding of Python, bash scripting, and basic cloud platform skills (GCP or AWS).
- Excellent communication and analytical skills.

What you'll get
- Work closely with the founders to help grow the organization to the next level, alongside some of the best and brightest talent around you.
- An excellent culture; we encourage collaboration, growth, and learning within the team.
- Competitive salary and equity.
- An autonomous and flexible role where you will be trusted with key tasks.
- An opportunity to have a real impact and be part of a company with purpose.
Role: Principal Software Engineer
We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. The role involves managing, processing, and analyzing large amounts of raw information in scalable databases, as well as developing unique data structures and writing algorithms for an entirely new set of products. The candidate must have critical thinking and problem-solving skills, experience in software development with advanced algorithms, and the ability to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus, as is exposure to cloud environments, continuous integration, and agile Scrum processes.
Responsibilities:
• Lead projects both as a principal investigator and as a project manager, responsible for meeting project requirements on schedule
• Software development that creates data-driven intelligence in products dealing with big-data backends
• Exploratory analysis of the data to arrive at efficient data structures and algorithms for given requirements
• The system may or may not involve machine learning models and pipelines, but will require advanced algorithm development
• Managing data in large-scale data stores (such as NoSQL DBs, time-series DBs, geospatial DBs, etc.)
• Creating metrics and evaluating algorithms for better accuracy and recall
• Ensuring efficient access and usage of data through indexing, clustering, etc.
• Collaborate with engineering and product development teams
Requirements:
• Master's or Bachelor's degree in Engineering in one of these domains - Computer Science, Information Technology, Information Systems, or a related field - from a top-tier school
• OR a Master's degree or higher in Statistics or Mathematics, with a hands-on background in software development
• 8 to 10 years of product development experience, having done algorithmic work
• 5+ years of experience working with large data sets or doing large-scale quantitative analysis
• Understanding of SaaS-based products and services
• Strong algorithmic problem-solving skills
• Able to mentor and manage a team and take responsibility for team deadlines
Skill set required:
• In-depth knowledge of the Python programming language
• Understanding of software architecture and software design
• Must have fully managed a project with a team
• Experience with Agile project management practices
• Experience with data processing, analytics, and visualization tools in Python (such as pandas, Matplotlib, SciPy, etc.)
• Strong understanding of SQL and querying NoSQL databases (e.g. MongoDB, Cassandra, Redis)
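To make the SQL expectation above concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and data are invented purely for illustration:

```python
import sqlite3

# Toy example: aggregate event counts per user with plain SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, event TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("u1", "click"), ("u1", "view"), ("u2", "click")],
)
rows = conn.execute(
    "SELECT user_id, COUNT(*) AS n FROM events GROUP BY user_id ORDER BY n DESC"
).fetchall()
print(rows)  # [('u1', 2), ('u2', 1)]
```

The same GROUP BY / aggregate pattern carries over directly to server-based engines like PostgreSQL or MySQL.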
InViz is a Bangalore-based startup helping enterprises simplify search and discovery experiences for both their end customers and their internal users. We use state-of-the-art technologies in computer vision, natural language processing, text mining, and other ML techniques to extract information and concepts from data in different formats - text, images, videos - and make them easily discoverable through simple, human-friendly touchpoints.
TSDE - Data
Data Engineer:
- Should have 3-6 years of total experience in data engineering.
- Should have experience coding data pipelines on GCP.
- Prior experience with Hadoop systems is ideal, as the candidate may not have exclusively GCP experience.
- Strong in programming languages like Scala, Python, and Java.
- Good understanding of various data storage formats and their advantages.
- Should have exposure to GCP tools to develop end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integrating API-based data sources).
- Should have a business mindset to understand data and how it will be used for BI and analytics purposes.
- Data Engineer certification preferred.
Experience in working with GCP tools like:
- Store: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore
- Ingest: Stackdriver, Pub/Sub, App Engine, Kubernetes Engine, Kafka, Dataprep, microservices
- Schedule: Cloud Composer
- Processing: Cloud Dataproc, Cloud Dataflow, Cloud Dataprep
- CI/CD: Bitbucket + Jenkins / GitLab
- Atlassian Suite
Ā .Responsibilities :
- Be involved in the planning, design, development, and maintenance of large-scale data repositories, pipelines, analytical solutions, and knowledge management strategy
- Build and maintain an optimal data pipeline architecture to ensure scalability and connect operational systems data to analytics and business intelligence (BI) systems
- Build data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader
- Report and derive insights from large data chunks on import/export, and communicate relevant pointers to support decision-making
- Prepare, analyze, and present reports to management for further developmental activities
- Anticipate, identify, and solve issues concerning data management to improve data quality
Requirements :
- Ability to build and maintain ETL pipelines
- Technical business analysis experience and hands-on experience developing functional specs
- Good understanding of Data Engineering principles including data modeling methodologies
- Sound understanding of PostgreSQL
- Strong analytical and interpersonal skills as well as reporting capabilities
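The "build and maintain ETL pipelines" requirement above can be sketched minimally in Python using only the standard library. The source data, column names, and target table here are made up for illustration; a real pipeline would add error logging, auditing, and quality checks:

```python
import csv
import io
import sqlite3

# Minimal extract-transform-load sketch: read CSV text, clean a
# field, and load it into a relational store.
raw_csv = "sku,price\nA1, 10.5 \nB2,3\n"

def extract(text: str) -> list[dict]:
    # Extract: parse CSV rows into dictionaries.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> list[tuple]:
    # Transform: strip whitespace and normalize price to float.
    return [(r["sku"], float(r["price"].strip())) for r in rows]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    # Load: create the target table if needed and insert the rows.
    conn.execute("CREATE TABLE IF NOT EXISTS products (sku TEXT, price REAL)")
    conn.executemany("INSERT INTO products VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(raw_csv)), conn)
total = conn.execute("SELECT SUM(price) FROM products").fetchone()[0]
print(total)  # 13.5
```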
Technical & Business Expertise:
- Hands-on integration experience in SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced-level database development experience in SQL Server
- Proven advanced-level understanding of Data Lake concepts
- Proven intermediate-level proficiency writing Python or a similar programming language
- Intermediate understanding of cloud platforms (GCP)
- Intermediate understanding of data warehousing
- Advanced understanding of source control (GitHub)
Role: ODI Developer
Location: Hyderabad (Initially remote)
Experience: 5-8 Years
Technovert is not your typical IT services firm. We have to our credit two successful products generating $2M+ in licensing/SaaS revenues, which is rare in the industry.
We are obsessed with our love for technology and the infinite possibilities it can create for making this world a better place. Our clients find us at our best when we are challenged with their toughest problems, and we love chasing those problems; it thrills us and motivates us to deliver more. Our global delivery model has earned us the trust and reputation of being a partner of choice.
We have a strong heritage built on great people who put customers first and deliver exceptional results with no surprises - every time. We partner with you to understand the interconnection of user experience, business goals, and information technology. It is the optimal fusing of these three drivers that delivers.
Must have:
- DWH implementation experience, including developing ETL processes - ETL control tables, error logging, auditing, data quality, etc.
- Responsible for creating ELT maps, migrating into different environments, maintaining and monitoring the infrastructure, working with DBAs, and creating new reports to assist executive and managerial levels in analyzing business needs to target customers.
- Should be able to implement reusability, parameterization, workflow design, etc.
- Expertise in the Oracle ODI toolset and OAC, plus knowledge of the ODI master and work repositories, data modeling, and ETL design.
- Experience using ODI Topology Manager to create connections to various technologies such as Oracle, SQL Server, flat files, XML, etc.
- Experience using ODI mappings, error handling, automation with ODI, load plans, and migration of objects.
- Ability to design ETL unit test cases and debug ETL mappings; expertise in developing load plans and scheduling jobs.
- Integrate ODI with multiple sources/targets.
Nice to have:
- Exposure towards Oracle Cloud Infrastructure (OCI) is preferable.
- Knowledge in Oracle Analytics Cloud to Explore data through visualizations, load, and model data.
- Hands-on experience of ODI 12c would be an added advantage.
Qualification:
- Overall 3+ years of experience in Oracle Data Integrator (ODI) and Oracle Data Integrator Cloud Service (ODICS).
- Experience in designing and implementing the E-LT architecture that is required to build a data warehouse, including source-to-staging area, staging-to-target area, data transformations, and EL-T process flows.
- Must be well versed and hands-on in using and customizing Knowledge Modules (KM) and experience of performance tuning of mappings.
- Must be self-starting, have strong attention to detail and accuracy, and able to fill multiple roles within the Oracle environment.
- Should be good with Oracle/SQL and should have a good understanding of DDL Deployments.
Job Description:
We are looking for a Big Data Engineer who has worked across the entire ETL stack: someone who has ingested data in both batch and live-stream formats, transformed large volumes of data daily, built data warehouses to store the transformed data, and integrated different visualization dashboards and applications with the data stores. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.
Responsibilities:
- Develop, test, and implement data solutions based on functional / non-functional business requirements.
- You would be required to code in Scala and PySpark daily on Cloud as well as on-prem infrastructure
- Build Data Models to store the data in a most optimized manner
- Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Implementing the ETL process and optimal data pipeline architecture
- Monitoring performance and advising any necessary infrastructure changes.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Proactively identify potential production issues and recommend and implement solutions
- Must be able to write quality code and build secure, highly available systems.
- Create design documents that describe the functionality, capacity, architecture, and process.
- Review peer-codes and pipelines before deploying to Production for optimization issues and code standards
Skill Sets:
- Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and "big data" technologies.
- Proficient understanding of distributed computing principles
- Experience in working with batch processing/ real-time systems using various open-source technologies like NoSQL, Spark, Pig, Hive, Apache Airflow.
- Implemented complex projects dealing with considerable data sizes (petabyte scale).
- Optimization techniques (performance, scalability, monitoring, etc.)
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases such as HBase, Cassandra, MongoDB, etc.
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Creation of DAGs for data engineering
- Expert at Python /Scala programming, especially for data engineering/ ETL purposes
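The "creation of DAGs for data engineering" skill above comes down to expressing task dependencies and ordering them topologically. Here is a minimal sketch using Python's stdlib graphlib; the task names are hypothetical, and an orchestrator such as Airflow layers scheduling, retries, and monitoring on top of this same idea:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it
# depends on (its predecessors).
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

# static_order yields tasks in an order where every dependency
# runs before its dependents; it raises CycleError on cycles.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'report']
```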
Must Have Skills: