Data Engineer

at Codalyze Technologies

Posted by Aishwarya Hire
Mumbai • 3 - 7 yrs • ₹7L - ₹20L / yr • Full time
Skills
Hadoop
Big Data
Scala
Spark
Amazon Web Services (AWS)
Java
Python
Apache Hive
Job Overview :

Your mission is to help lead the team toward creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex, mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies will inspire your team to follow suit.

Responsibilities and Duties :

- As a Data Engineer you will be responsible for developing data pipelines for numerous applications, handling structured, semi-structured, and unstructured data. Big data knowledge, especially Spark and Hive, is highly preferred.

- Work in a team and provide proactive technical oversight; advise development teams to foster re-use, design for scale, stability, and operational efficiency of data/analytical solutions

Education level :

- Bachelor's degree in Computer Science or equivalent

Experience :

- Minimum 5+ years of relevant experience on production-grade projects, with hands-on, end-to-end software development experience

- Expertise in application, data and infrastructure architecture disciplines

- Expertise in designing data integrations using ETL and other data integration patterns

- Advanced knowledge of architecture, design and business processes

Proficiency in :

- Modern programming languages like Java, Python, Scala

- Big Data technologies: Hadoop, Spark, Hive, Kafka (see the PySpark sketch after this list)

- Writing well-optimized SQL queries

- Orchestration and deployment tools like Airflow & Jenkins for CI/CD (Optional)

- Responsible for design and development of integration solutions with Hadoop/HDFS, Real-Time Systems, Data Warehouses, and Analytics solutions

- Knowledge of system development lifecycle methodologies, such as Waterfall and Agile.

- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices.

- Experience generating physical data models and the associated DDL from logical data models.

- Experience developing data models for transactional systems and operational reporting, including the development of or interfacing with data analysis, data mapping, and data rationalization artifacts.

- Experience enforcing data modeling standards and procedures.

- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development and Big Data solutions.

- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
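
As a rough illustration of the Spark, Hive, and SQL work named above, here is a minimal PySpark sketch. It is a sketch only: the table and column names (raw_db.events, event_date, analytics.daily_event_counts) are hypothetical placeholders, not part of any actual Codalyze codebase.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hive support lets Spark read and write Hive-managed tables.
    spark = (
        SparkSession.builder
        .appName("daily-event-rollup")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Read a (hypothetical) raw Hive table and aggregate with DataFrame functions.
    events = spark.table("raw_db.events")
    daily_counts = (
        events
        .filter(F.col("event_date") >= "2023-01-01")
        .groupBy("event_date", "event_type")
        .agg(F.count("*").alias("event_count"))
    )

    # The same aggregation expressed as Spark SQL.
    events.createOrReplaceTempView("events")
    daily_counts_sql = spark.sql("""
        SELECT event_date, event_type, COUNT(*) AS event_count
        FROM events
        WHERE event_date >= '2023-01-01'
        GROUP BY event_date, event_type
    """)

    # Persist the result back to Hive, partitioned by date for efficient reads.
    (daily_counts.write
        .mode("overwrite")
        .partitionBy("event_date")
        .saveAsTable("analytics.daily_event_counts"))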

Skills :

Must Know :

- Core big-data concepts

- Spark - PySpark/Scala

- A data integration tool like Pentaho, NiFi, SSIS, etc. (at least one)

- Handling of various file formats

- Cloud platform - AWS/Azure/GCP

- Orchestration tool - Airflow (a minimal DAG sketch follows below)
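
A minimal Airflow DAG sketch for the orchestration item above, assuming Airflow 2.x; the DAG id, schedule, and script paths are hypothetical placeholders, not an actual pipeline.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # default_args are inherited by every task in the DAG.
    default_args = {
        "owner": "data-engineering",
        "retries": 2,
        "retry_delay": timedelta(minutes=5),
    }

    with DAG(
        dag_id="daily_ingest_pipeline",
        default_args=default_args,
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Each task submits one stage of the (hypothetical) pipeline.
        ingest = BashOperator(
            task_id="ingest_raw_files",
            bash_command="python /opt/jobs/ingest.py --date {{ ds }}",
        )
        transform = BashOperator(
            task_id="run_spark_transform",
            bash_command="spark-submit /opt/jobs/transform.py --date {{ ds }}",
        )

        # Run ingestion first, then the Spark transform.
        ingest >> transform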

About Codalyze Technologies

Tech agency providing web and mobile app development for Android and iOS using ReactJS, React Native, and NodeJS.
Founded 2016 • Products & Services • 20-100 employees • Profitable

Similar jobs

Data Engineer (Snowflake)

at a reputed firm providing world-class consulting services

Agency job
via Jobdost
Snowflake schema
Amazon Web Services (AWS)
AWS Lambda
ETL
Informatica
Data Warehouse (DWH)
Ahmedabad, Hyderabad, Pune, Delhi • 5 - 8 yrs • ₹25L - ₹30L / yr

Data Engineer 

 

Mandatory Requirements 

  • Expertise in ETL and Snowflake
  • Experience in AWS ETL using AWS Glue, AWS Lambda
  • Proficiency with blob storage and data lakes
  • Understanding of file-based ingestion best practices.

CORE RESPONSIBILITIES

  • Data ingestion from different data sources that expose data through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies
  • Data processing/transformation using various technologies such as Spark and cloud services. You will need to understand your part of the business logic and implement it using the language supported by the base data platform
  • Develop automated data quality checks to make sure the right data enters the platform and to verify the results of calculations (a minimal sketch follows this list)
  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
  • Define process improvement opportunities to optimize data collection, insights and displays.
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 
  • Identify and interpret trends and patterns from complex data sets 
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 
  • Key participant in regular Scrum ceremonies with the agile teams  
  • Proficient at developing queries, writing reports and presenting findings 
  • Mentor junior members and bring best industry practices 
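
As a hedged illustration of the automated data quality checks mentioned above, here is a minimal PySpark sketch; the S3 path and the order_id/amount columns are hypothetical placeholders, not the actual schema this role would work with.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dq-checks").getOrCreate()

    # Hypothetical ingested dataset; path and column names are placeholders.
    df = spark.read.parquet("s3://example-bucket/landing/orders/")

    # Rule 1: required identifiers must not be null.
    null_ids = df.filter(F.col("order_id").isNull()).count()

    # Rule 2: business keys must be unique.
    duplicate_keys = df.count() - df.dropDuplicates(["order_id"]).count()

    # Rule 3: amounts must be non-negative.
    negative_amounts = df.filter(F.col("amount") < 0).count()

    failures = {
        "null_order_ids": null_ids,
        "duplicate_order_ids": duplicate_keys,
        "negative_amounts": negative_amounts,
    }

    # Fail the pipeline loudly rather than letting bad data through.
    bad = {rule: n for rule, n in failures.items() if n > 0}
    if bad:
        raise ValueError(f"Data quality checks failed: {bad}")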

 

QUALIFICATIONS

  • 5-7+ years' experience as a data engineer in the consumer finance, manufacturing, or oil & gas industry
  • Strong background in math, statistics, computer science, data science or a related discipline
  • Advanced knowledge of one of: Python, R, C#
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Azure, Docker/Kubernetes, SQL Server, Synapse, Snowflake, AWS
  • Proficient with
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. MongoDB, PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization (e.g. Tableau, PowerBI, QlikSense)
  • Comfortable learning about and deploying new technologies and tools. 
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 
  • Good written and oral communication skills and ability to present results to non-technical audiences 
  • Knowledge of business intelligence and analytical tools, technologies and techniques.

 

Job posted by
Saida Jabbar

Senior Big Data Engineer - Java, Spark & Cloud

at Cactus Communications

Founded 2002  •  Product  •  1000-5000 employees  •  Profitable
PySpark
Data engineering
Big Data
Hadoop
Spark
Java
Amazon Web Services (AWS)
icon
Remote only
icon
4 - 7 yrs
icon
Best in industry

Please note - This is a 100% remote opportunity and you can work from any location.

 

About the team:

You will be a part of Cactus Labs which is the R&D Cell of Cactus Communications. Cactus Labs is a high impact cell that works to solve complex technical and business problems that help keep us strategically competitive in the industry. We are a multi-cultural team spread across multiple countries. We work in the domain of AI/ML especially with Text (NLP - Natural Language Processing), Language Understanding, Explainable AI, Big Data, AR/VR etc.

 

The opportunity: Within Cactus Labs you will work with the Big Data team. This team manages terabytes of data coming from different sources. We are re-orchestrating data pipelines to handle this data at scale and improve visibility and robustness. We operate across all three cloud platforms and leverage the best of them.

 

In this role, you will get to own a component end to end. You will also get to work on cloud platforms and learn to design distributed data processing systems that operate at scale.

 

Responsibilities:

  • Build and maintain robust data processing pipelines at scale
  • Collaborate with a team of Big Data Engineers, Big Data and Cloud Architects and Domain SMEs to drive the product ahead
  • Follow best practices in building new processes and optimizing existing ones
  • Stay up to date with the progress in the domain since we work on cutting-edge technologies and are constantly trying new things out
  • Build solutions for massive scale. This requires extensive benchmarking to pick the right approach
  • Understand the data inside and out and make sense of it. You will at times need to draw conclusions and present them to the business users
  • Be independent, self-driven and highly motivated. While you will have the best people to learn from and access to various courses or training materials, we expect you to take charge of your growth and learning.

 

Expectations from you:

  • 4-7 Years of relevant experience in Big Data with Java
  • Highly proficient in distributed computing and Big Data Ecosystem - Hadoop, HDFS, Apache Spark
  • Good understanding of data lakes and their importance in a Big Data ecosystem
  • Being able to mentor junior team members and review their code
  • Experience in working in a Cloud Environment (AWS, Azure or GCP)
  • You like to work without a lot of supervision or micromanagement.
  • Above all, you get excited by data. You like to dive deep, mine patterns and draw conclusions. You believe in making data driven decisions and helping the team look for the pattern as well.

 

Preferred skills:

  • Familiarity with search engines like Elasticsearch and Big Data warehouse systems like AWS Athena, Google BigQuery, etc.
  • Building data pipelines using Airflow
  • Experience working in an AWS cloud environment.
Job posted by
Hemal Kamble

Deep Learning Engineer

at TIGI HR Solution Pvt. Ltd.

Founded 2014 • Services • Profitable
Python
C++
CUDA
TensorFlow
PyTorch
Linux administration
OpenCV
ROS
Deep Learning
Bengaluru (Bangalore) • 3 - 5 yrs • ₹7L - ₹15L / yr

About the role:

We are looking for an engineer to apply Deep Learning algorithms to implement and improve perception algorithms for autonomous vehicles. The position requires you to work on the full life-cycle of Deep Learning development, including data collection, feature engineering, model training, and testing. You will have the opportunity to implement state-of-the-art Deep Learning algorithms and apply them to real end-to-end production. You will be working with the team and team lead on challenging Deep Learning projects to deliver product quality improvements.

Responsibilities:

  • Build novel architectures for classifying, detecting, and tracking objects.
  • Develop efficient Deep Learning architectures that can run in real-time on NVIDIA devices.
  • Optimize the stack for deployment on embedded devices
  • Work on the data pipeline: data acquisition, pre-processing, and analysis.

Skillsets:

  • Languages: C++, Python.
  • Frameworks: CUDA, TensorRT, PyTorch, TensorFlow, ONNX (see the export sketch after this list).
  • Good understanding of Linux and Version Control (Git, GitHub, GitLab).
  • Experienced with OpenCV and Deep Learning for solving image-domain problems.
  • Strong understanding of ROS.
  • Skilled with software design, development, and bug-fixing.
  • Coordinate with team members for the development and maintenance of the package.
  • Strong mathematical skills and understanding of probabilistic techniques.
  • Experience handling large data sets efficiently.
  • Experience with deploying Deep Learning models for real-time applications on Nvidia platforms like Drive AGX Pegasus, Jetson AGX Xavier, etc.
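
As a hedged sketch of the PyTorch-to-ONNX step implied by the frameworks above (the exported file can then be handed to TensorRT for deployment), here is a minimal example; the toy architecture and file names are illustrative only, not the team's actual models.

    import torch
    import torch.nn as nn

    # A toy CNN standing in for a perception model; architecture is illustrative.
    class TinyBackbone(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.head = nn.Linear(16 * 112 * 112, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.head(x.flatten(1))

    model = TinyBackbone().eval()

    # Export to ONNX; the resulting file can be fed to TensorRT's parser
    # (e.g. via trtexec) for optimized inference on NVIDIA devices.
    dummy_input = torch.randn(1, 3, 224, 224)
    torch.onnx.export(
        model,
        dummy_input,
        "tiny_backbone.onnx",
        input_names=["image"],
        output_names=["logits"],
        opset_version=13,
    )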


Add On Skills:

  • Frameworks: PyTorch Lightning
  • Experience with autonomous robots
  • OpenCV projects, Deep Learning projects
  • Experience with 3D data and representations (point clouds, meshes, etc.)
  • Experience with a wide variety of Deep Learning models (e.g., LSTM, RNN, CNN, GAN, etc.)
Job posted by
Happy Kantesariya

Project Engineer Intern

at Helical IT Solutions

Founded 2012  •  Products & Services  •  20-100 employees  •  Profitable
PySpark
Data engineering
Big Data
Hadoop
Spark
Hibernate (Java)
Jasmine (Javascript Testing Framework)
SQL
Python
Hyderabad • 0 - 0 yrs • ₹1.2L - ₹3.5L / yr

Job description

About Company

Helical Insight, an open source Business Intelligence tool from Helical IT Solutions Pvt. Ltd, based out of Hyderabad, is looking for freshers with strong knowledge of SQL. Helical Insight has more than 50 clients from various sectors and has been awarded the most promising company in the Business Intelligence space. We are looking for a rockstar teammate to join our company.

Job Brief

We are looking for a Business Intelligence (BI) Developer to create and manage BI and analytics solutions that turn data into knowledge. In this role, you should have a background in data and business analysis. You should be analytical and an excellent communicator. If you also have business acumen and a problem-solving aptitude, we'd like to meet you. Excellent knowledge of SQL queries is required, as is basic knowledge of HTML, CSS, and JS.

You would be working closely with customers of various domains to understand their data and business requirements and to deliver the required analytics in the form of various reports, dashboards, etc. This is an excellent client-interfacing role with the opportunity to work across various sectors and geographies, as well as various kinds of databases, including NoSQL, RDBMS, graph DBs, columnar DBs, etc.
Skill set and Qualification required

Responsibilities

  • Attending client calls to gather requirements and show progress
  • Translate business needs to technical specifications
  • Design, build and deploy BI solutions (e.g. reporting tools)
  • Maintain and support data analytics platforms
  • Conduct unit testing and troubleshooting
  • Evaluate and improve existing BI systems
  • Collaborate with teams to integrate systems
  • Develop and execute database queries and conduct analyses
  • Create visualizations and reports for requested projects
  • Develop and update technical documentation

Requirements

  • Excellent expertise in SQL queries
  • Proven experience as a BI Developer or Data Scientist
  • Background in data warehouse design (e.g. dimensional modeling) and data mining
  • In-depth understanding of database management systems, online analytical processing (OLAP) and the ETL (Extract, Transform, Load) framework
  • Familiarity with BI technologies
  • Proven ability to take initiative and be innovative
  • Analytical mind with a problem-solving aptitude
  • BE in Computer Science/IT

Education: BE/BTech/MCA/BCA/MTech/MS, or equivalent preferred.
Interested candidates call us on +91 7569 765 162
Job posted by
Bhavani Thanga

Python developer

at Gauge Data Solutions Pvt Ltd

Founded 2014 • Products & Services • 20-100 employees
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Artificial Intelligence (AI)
Python
OOAD
Data storage
recommendation algorithm
Noida • 0 - 4 yrs • ₹3L - ₹8L / yr

Essential Skills :

- Develop, enhance and maintain Python-related projects, data services, platforms and processes.

- Apply and maintain data quality checks to ensure data integrity and completeness.

- Able to integrate multiple data sources and databases.

- Collaborate with cross-functional teams across Decision Sciences, Search, and Database Management to design innovative solutions, capture requirements, and drive a common future vision.

Technical Skills/Capabilities :

- Hands-on experience in the Python programming language.

- Understanding and proven application of Computer Science fundamentals in object-oriented design, data structures, algorithm design, regular expressions, data storage procedures, problem solving, and complexity analysis.

- Understanding of natural language processing and basic ML algorithms will be a plus.

- Good troubleshooting and debugging skills.

- Strong individual contributor, self-motivated, and a proven team player.

- Eager to learn and develop new experience and skills.

- Good communication and interpersonal skills.

About Company Profile :

Gauge Data Solutions Pvt Ltd :

- We are a leading company in Data Science, Machine Learning and Artificial Intelligence.

- Within Gauge Data we have a competitive environment for developers and engineers.

- We at Gauge create potential solutions for real-world problems. One such example of our engineering is Casemine.

- Casemine is a legal research platform powered by Artificial Intelligence. It helps lawyers, judges and law researchers in their day to day life.

- Casemine provides exhaustive case results to its users with the use of cutting edge technologies.

- It is developed with the efforts of great engineers at Gauge Data.

- One such opportunity is now open for you. We at Gauge Data invite applications from competitive, self-motivated Python Developers.

Purpose of the Role :

- This position will play a central role in developing new features and enhancements for the products and services at Gauge Data.

- To know more about what we do and how we do it, feel free to read these articles:

- https://bit.ly/2YfVAsv

- https://bit.ly/2rQArJc

- You can also visit us at https://www.casemine.com/.

- For more information visit us at: - www.gaugeanalytics.com

- Join us on LinkedIn, Twitter & Facebook
Job posted by
Deeksha Dewal
PySpark
Python
Spark
Bengaluru (Bangalore) • 3 - 7 yrs • ₹8L - ₹16L / yr
Roles and Responsibilities:

• Responsible for developing and maintaining applications with PySpark
• Contribute to the overall design and architecture of the applications developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc. (a minimal sketch follows the skills list below).
• Interact with business users to understand requirements and troubleshoot issues.
• Implement projects based on functional specifications.

Must-Have Skills:

• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity.
• Should have excellent experience in Big Data programming for data transformation and aggregations
• Good at ETL architecture: business-rules processing and data extraction from the data lake into data streams for business consumption.
• Good customer communication.
• Good analytical skills
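
A minimal sketch of the performance-tuning levers mentioned in the responsibilities, assuming PySpark; the memory, core, and partition numbers, paths, and column names are illustrative placeholders to be sized against the actual cluster and data volume.

    from pyspark.sql import SparkSession, functions as F

    # Executor sizing and shuffle parallelism are set when the session is built;
    # the specific numbers here are illustrative, not recommendations.
    spark = (
        SparkSession.builder
        .appName("tuned-etl")
        .config("spark.executor.memory", "8g")
        .config("spark.executor.cores", "4")
        .config("spark.sql.shuffle.partitions", "200")
        .getOrCreate()
    )

    df = spark.read.parquet("s3://example-bucket/transactions/")

    # Repartition on the aggregation key to spread the shuffle evenly.
    df = df.repartition(200, "customer_id")

    # Cache only when the DataFrame is reused by several downstream actions.
    df.cache()

    summary = df.groupBy("customer_id").agg(F.sum("amount").alias("total_spend"))

    # Coalesce before writing to avoid producing thousands of tiny files.
    summary.coalesce(32).write.mode("overwrite").parquet(
        "s3://example-bucket/marts/spend/"
    )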
Job posted by
Priyanka U

Data Scientist

at Searce Inc

Founded 2004  •  Products & Services  •  100-1000 employees  •  Profitable
Data Science
R Programming
Python
Deep Learning
Hadoop
Neural networks
Natural Language Processing (NLP)
Machine Learning (ML)
Computer Vision
TensorFlow
PyTorch
Artificial Intelligence (AI)
Pune, Bengaluru (Bangalore) • 3 - 5 yrs • ₹10L - ₹15L / yr

Data Scientist - Applied AI


Who we are?

Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the "Next" in the "Now" for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.


What do we believe?

  • Best practices are overrated
      • Implementing best practices can only make one average.
  • Honesty and Transparency
      • We believe in naked truth. We do what we tell and tell what we do.
  • Client Partnership
    • Client - Vendor relationship: No. We partner with clients instead. 
    • And our sales team comprises 100% of our clients.

How do we work?

It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self motivated. Self governing teams. We own it.

So, what are we hunting for?

As a Data Scientist, you will help develop and enhance the algorithms and technology that power our unique system. This role covers a wide range of challenges, from developing new models using pre-existing components to enabling current systems to be more intelligent. You should be able to train models using existing data and use them in the most creative manner to deliver the smartest experience to customers. You will have to develop multiple AI applications that push the threshold of intelligence in machines.


Working on multiple projects at a time, you will have to maintain a consistently high level of attention to detail while finding creative ways to provide analytical insights. You will also have to thrive in a fast, high-energy environment and should be able to balance multiple projects in real time. The thrill of the next big challenge should drive you, and when faced with an obstacle, you should be able to find clever solutions. You must have the ability and interest to work on a range of different types of projects and business processes, and must have a background that demonstrates this ability.






Your bucket of Undertakings :

  1. Collaborate with team members to develop new models to be used for classification problems
  2. Work on software profiling, performance tuning and analysis, and other general software engineering tasks
  3. Use independent judgment to take existing data and build new models from it
  4. Collaborate and provide technical guidance, come up with new ideas, and do rapid prototyping and conversion of prototypes into scalable products
  5. Conduct experiments to assess the accuracy and recall of language processing modules and to study the effect of such experiments
  6. Lead AI R&D initiatives to include prototypes and minimum viable products
  7. Work closely with multiple teams on projects like Visual quality inspection, ML Ops, Conversational banking, Demand forecasting, Anomaly detection etc. 
  8. Build reusable and scalable solutions for use across the customer base
  9. Prototype and demonstrate AI related products and solutions for customers
  10. Assist business development teams in the expansion and enhancement of a pipeline to support short- and long-range growth plans
  11. Identify new business opportunities and prioritize pursuits for AI 
  12. Participate in long range strategic planning activities designed to meet the Company’s objectives and to increase its enterprise value and revenue goals

Education & Experience : 

  1. BE/B.Tech/Masters in a quantitative field such as CS, EE, Information Sciences, Statistics, Mathematics, Economics, Operations Research, or related, with a focus on applied and foundational Machine Learning, AI, NLP and/or data-driven statistical analysis & modelling
  2. 3+ years of experience majorly in applying AI/ML/NLP/deep learning/data-driven statistical analysis & modelling solutions to multiple domains; experience with financial engineering and financial processes a plus
  3. Strong, proven programming skills with machine learning, deep learning and Big Data frameworks, including TensorFlow, Caffe, Spark and Hadoop; experience with writing complex programs and implementing custom algorithms in these and other environments
  4. Experience beyond using open source tools as-is, and writing custom code on top of, or in addition to, existing open source frameworks
  5. Proven capability in demonstrating successful advanced technology solutions (either prototypes, POCs, well-cited research publications, and/or products) using ML/AI/NLP/data science in one or more domains
  6. Research and implement novel machine learning and statistical approaches
  7. Experience in data management, data analytics middleware, platforms and infrastructure, cloud and fog computing is a plus
  8. Excellent communication skills (oral and written) to explain complex algorithms, solutions to stakeholders across multiple disciplines, and ability to work in a diverse team

Accomplishment Set: 


  1. Extensive experience with Hadoop and Machine learning algorithms
  2. Exposure to Deep Learning, Neural Networks, or related fields and a strong interest and desire to pursue them
  3. Experience in Natural Language Processing, Computer Vision, Machine Learning or Machine Intelligence (Artificial Intelligence)
  4. Passion for solving NLP problems
  5. Experience with specialized tools and project for working with natural language processing
  6. Knowledge of machine learning frameworks like TensorFlow and PyTorch
  7. Experience with software version control systems like GitHub
  8. Fast learner and be able to work independently as well as in a team environment with good written and verbal communication skills
Job posted by
Mishita Juneja
Data steward
MDM
Tamr
Reltio
Data engineering
Python
ETL
SQL
Windows Azure
SAS
DM Studio
Profisee
NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore), Mumbai, Pune • 7 - 8 yrs • ₹15L - ₹16L / yr
Data Steward:

The Data Steward will collaborate and work closely within the group software engineering and business division. The Data Steward has overall accountability for the group's/division's overall data and reporting posture by responsibly managing data assets, data lineage, and data access, supporting sound data analysis. This role requires focus on data strategy, execution, and support for projects, programs, application enhancements, and production data fixes. The Data Steward makes well-thought-out decisions on complex or ambiguous data issues and establishes the data stewardship and information management strategy and direction for the group, and effectively communicates with individuals at various levels of the technical and business communities. This individual will become part of the corporate Data Quality and Data Management/entity resolution team supporting various systems across the board.

 

Primary Responsibilities:

 

  • Responsible for data quality and data accuracy across all group/division delivery initiatives.
  • Responsible for data analysis, data profiling, data modeling, and data mapping capabilities.
  • Responsible for reviewing and governing data queries and DML.
  • Accountable for the assessment, delivery, quality, accuracy, and tracking of any production data fixes.
  • Accountable for the performance, quality, and alignment to requirements for all data query design and development.
  • Responsible for defining standards and best practices for data analysis, modeling, and queries.
  • Responsible for understanding end-to-end data flows and identifying data dependencies in support of delivery, release, and change management.
  • Responsible for the development and maintenance of an enterprise data dictionary that is aligned to data assets and the business glossary for the group.
  • Responsible for the definition and maintenance of the group's data landscape, including overlays with the technology landscape, end-to-end data flow/transformations, and data lineage.
  • Responsible for rationalizing the group's reporting posture through the definition and maintenance of a reporting strategy and roadmap.
  • Partners with the data governance team to ensure data solutions adhere to the organization’s data principles and guidelines.
  • Owns group's data assets including reports, data warehouse, etc.
  • Understand customer business use cases and be able to translate them to technical specifications and vision on how to implement a solution.
  • Accountable for defining the performance tuning needs for all group data assets and managing the implementation of those requirements within the context of group initiatives as well as steady-state production.
  • Partners with others in test data management and masking strategies and the creation of a reusable test data repository.
  • Responsible for solving data-related issues and communicating resolutions with other solution domains.
  • Actively and consistently support all efforts to simplify and enhance the Clinical Trial Prediction use cases.
  • Apply knowledge in analytic and statistical algorithms to help customers explore methods to improve their business.
  • Contribute toward analytical research projects through all stages including concept formulation, determination of appropriate statistical methodology, data manipulation, research evaluation, and final research report.
  • Visualize and report data findings creatively in a variety of visual formats that appropriately provide insight to the stakeholders.
  • Achieve defined project goals within customer deadlines; proactively communicate status and escalate issues as needed.

 

Additional Responsibilities:

 

  • Strong understanding of the Software Development Life Cycle (SDLC) with Agile Methodologies
  • Knowledge and understanding of industry-standard/best practices requirements gathering methodologies.
  • Knowledge and understanding of Information Technology systems and software development.
  • Experience with data modeling and test data management tools.
  • Experience in data integration projects
  • Good problem-solving & decision-making skills.
  • Good communication skills within the team, site, and with the customer

 

Knowledge, Skills and Abilities

 

  • Technical expertise in data architecture principles and design aspects of various DBMS and reporting concepts.
  • Solid understanding of key DBMS platforms like SQL Server, Azure SQL
  • Results-oriented, diligent, and works with a sense of urgency. Assertive, responsible for his/her own work (self-directed), have a strong affinity for defining work in deliverables, and be willing to commit to deadlines.
  • Experience in MDM tools like MS DQ, SAS DM Studio, Tamr, Profisee, Reltio, etc.
  • Experience in report and dashboard development
  • Statistical and Machine Learning models
  • Python (sklearn, numpy, pandas, gensim); a minimal profiling sketch follows this list
  • Nice to have:
  • 1 yr of ETL experience
  • Natural Language Processing
  • Neural networks and Deep Learning
  • Experience with the keras, tensorflow, spacy, nltk, and LightGBM Python libraries
 

Interaction :  Frequently interacts with subordinate supervisors.

Education : Bachelor’s degree, preferably in Computer Science, B.E or other quantitative field related to the area of assignment. Professional certification related to the area of assignment may be required

Experience: 7 years of pharmaceutical/biotech/life sciences experience, 5 years of clinical trials experience and knowledge. Excellent documentation, communication, and presentation skills, including PowerPoint.

 

Job posted by
RAHUL BATTA

Computer Vision Scientist at Checko

at TransPacks Technologies IIT Kanpur

Founded 2017  •  Product  •  20-100 employees  •  Raised funding
Computer Vision
C++
Java
Algorithms
Data Structures
Performance Evaluation
Debugging
Data Science
Hyderabad • 0 - 4 yrs • ₹3L - ₹6L / yr

Responsibilities

  • Own the design, development, testing, deployment, and craftsmanship of the team’s infrastructure and systems capable of handling massive amounts of requests with high reliability and scalability
  • Leverage the deep and broad technical expertise to mentor engineers and provide leadership on resolving complex technology issues
  • Entrepreneurial and out-of-box thinking essential for a technology startup
  • Guide the team in unit-testing code for robustness, including edge cases, usability, and general reliability

 

Requirements

  • In-depth understanding of image processing algorithms, pattern recognition methods, and rule-based classifiers
  • Experience in feature extraction, object recognition and tracking, image registration, noise reduction, image calibration, and correction
  • Ability to understand, optimize and debug imaging algorithms
  • Understanding of and experience with the OpenCV library (see the sketch after this list)
  • Fundamental understanding of mathematical techniques involved in ML and DL schemas (instance-based methods, boosting methods, PGMs, Neural Networks, etc.)
  • Thorough understanding of state-of-the-art DL concepts (sequence modeling, attention, convolution, etc.) along with the knack to imagine new schemas that work for the given data.
  • Understanding of engineering principles and a clear understanding of data structures and algorithms
  • Experience in writing production level codes using either C++ or Java
  • Experience with technologies/libraries such as python pandas, numpy, scipy
  • Experience with TensorFlow and scikit-learn.
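
A minimal OpenCV sketch of the rule-based pre-processing this role describes (edge extraction ahead of a learned classifier); the input path and Canny thresholds are illustrative placeholders.

    import cv2

    # Hypothetical input image; the thresholds below are starting points only.
    image = cv2.imread("part_scan.png")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Blur to suppress sensor noise before edge extraction.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Canny edges followed by contour extraction, a typical rule-based
    # step before handing candidate regions to a classifier.
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    print(f"Found {len(contours)} candidate regions")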
Job posted by
Pranav Asthana

Data Scientist - Precily AI

at Precily Private Limited

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Data Science
Artificial Intelligence (AI)
R Programming
Python
Bengaluru (Bangalore), NCR (Delhi | Gurgaon | Noida) • 3 - 7 yrs • ₹4L - ₹25L / yr
Job Description – Data Scientist

About Company Profile

Precily is a startup headquartered in Noida, IN. Precily is currently working with leading consulting & law firms, research firms & technology companies. Aura (Precily AI) is a data-analysis platform for enterprises that increases the efficiency of the workforce by providing AI-based solutions.

Responsibilities & Skills Required:

The role requires deep knowledge in designing, planning, testing and deploying analytics solutions, including the following:

  • Natural Language Processing (NLP), Neural Networks, Text Clustering, Topic Modelling, Information Extraction, Information Retrieval, Deep Learning, Machine Learning, cognitive science and analytics.
  • Proven experience implementing and deploying advanced AI solutions using R/Python.
  • Apply machine learning algorithms, statistical data analysis, text clustering, summarization, extracting insights from multiple data points.
  • Excellent understanding of analytics concepts and methodologies, including machine learning (unsupervised and supervised).
  • Hands-on in handling large amounts of structured and unstructured data.
  • Measure, interpret, and derive learning from results of analysis that will lead to improvements in document processing.

Skills Required:

  • Python, R, NLP, NLG, Machine Learning, Deep Learning & Neural Networks
  • Word vectorizers and word embeddings (word2vec & GloVe)
  • RNNs (CNN vs RNN), LSTM & GRU (LSTM vs GRU), pretrained embeddings (implementation in RNN)
  • Unsupervised and supervised learning, Deep Neural Networks
  • Framework: Keras/TensorFlow, Keras Embedding Layer output (a minimal sketch follows)

Please reach out to us: [email protected]
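
A minimal Keras sketch of the embedding-plus-LSTM stack named in the skills list, assuming TensorFlow 2.x; the vocabulary size, sequence length, and binary-classification head are hypothetical choices for illustration.

    import tensorflow as tf
    from tensorflow.keras import layers

    # Hypothetical sizes; vocabulary and sequence length depend on the corpus.
    VOCAB_SIZE, SEQ_LEN, EMBED_DIM = 20000, 100, 128

    # An Embedding layer feeding an LSTM for binary text classification.
    model = tf.keras.Sequential([
        layers.Embedding(VOCAB_SIZE, EMBED_DIM, input_length=SEQ_LEN),
        layers.LSTM(64),
        layers.Dense(1, activation="sigmoid"),
    ])

    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Build explicitly so the layer shapes (including the Embedding output)
    # can be inspected before training.
    model.build(input_shape=(None, SEQ_LEN))
    model.summary()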
Job posted by
Bharath Rao