Generalized linear model jobs

11+ Generalized linear model Jobs in India

Apply to 11+ Generalized linear model Jobs on CutShort.io. Find your next job, effortlessly. Browse Generalized linear model Jobs and apply today!

Banyan Data Services

1 recruiter
Posted by Sathish Kumar
Bengaluru (Bangalore)
3 - 15 yrs
₹6L - ₹20L / yr
Data Science
Data Scientist
MongoDB
Java
Big Data
+14 more

Senior Big Data Engineer 

Note: Notice period: 45 days

Banyan Data Services (BDS) is a US-based, data-focused company headquartered in San Jose, California, that specializes in comprehensive data solutions and services.

 

We are looking for a Senior Hadoop Big Data Engineer who has expertise in solving complex data problems across a big data platform. You will be a part of our development team based out of Bangalore. This team focuses on the most innovative and emerging data infrastructure software and services to support highly scalable and available infrastructure.

 

It's a once-in-a-lifetime opportunity to join our rocket ship startup run by a world-class executive team. We are looking for candidates who aspire to be a part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges.

 

 

Key Qualifications

 

· 5+ years of experience working with Java and Spring technologies

· At least 3 years of programming experience working with Spark on big data, including data profiling and building transformations (see the sketch after this list)

· Knowledge of microservices architecture is a plus

· Experience with NoSQL databases such as HBase, MongoDB, or Cassandra

· Experience with Kafka or similar streaming tools

· Knowledge of Scala is preferable

· Experience with agile application development

· Exposure to cloud technologies, including containers and Kubernetes

· Demonstrated experience performing DevOps for platforms

· Strong grasp of data structures and algorithms, with a focus on writing efficient, low-complexity code

· Exposure to graph databases

· Passion for learning new technologies and the ability to do so quickly

· A Bachelor's degree in a computer-related field or equivalent professional experience is required
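For illustration only, here is a minimal PySpark sketch of the kind of data profiling and transformation work referenced in the Spark qualification above. The dataset paths and column names (amount, created_at, customer_id) are hypothetical and not part of the role description.

```python
# A minimal sketch, assuming a hypothetical orders dataset: basic profiling
# (row count, null counts) plus a simple cleaning/aggregation transformation.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("order-profiling").getOrCreate()

# Load a raw dataset (path is an assumed example).
orders = spark.read.parquet("/data/raw/orders")

# Profiling: null counts per column and total row count.
null_counts = orders.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in orders.columns]
)
print("rows:", orders.count())
null_counts.show()

# A simple transformation: filter, derive a date column, and aggregate.
daily_revenue = (
    orders
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("customers"),
    )
)
daily_revenue.write.mode("overwrite").parquet("/data/curated/daily_revenue")
```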

 

Key Responsibilities

 

· Scope and deliver solutions, with the ability to design solutions independently based on high-level architecture

· Design and develop big data-focused microservices

· Work on big data infrastructure, distributed systems, data modeling, and query processing

· Build software with cutting-edge technologies on the cloud

· Willingness to learn new technologies and take on research-oriented projects

· Proven interpersonal skills; contribute to the team effort by accomplishing related results as needed

Sadup Softech

1 recruiter
Posted by madhuri g
Remote only
4 - 6 yrs
₹4L - ₹15L / yr
Google Cloud Platform (GCP)
BigQuery
PySpark
Data engineering
Big Data
+2 more

Job Description:

We are seeking a talented Machine Learning Engineer with expertise in software engineering to join our team. As a Machine Learning Engineer, your primary responsibility will be to develop machine learning (ML) solutions that focus on technology process improvements. Specifically, you will work on ML and generative AI solutions for technology and data management efficiencies, such as optimal cloud computing, knowledge bots, software code assistants, automated data management, etc.

 

Responsibilities:

- Collaborate with cross-functional teams to identify opportunities for technology process improvements that can be solved using machine learning and generative AI.

- Define and build innovative ML and generative AI systems, such as AI assistants for varied SDLC tasks and improved data and infrastructure management.

- Design and develop ML engineering solutions, generative AI applications, and fine-tuned large language models (LLMs) for the above, ensuring scalability, efficiency, and maintainability.

- Implement prompt engineering techniques to fine-tune and enhance LLMs for better performance and application-specific needs.

- Stay abreast of the latest advancements in generative AI and actively contribute to the research and development of new ML and generative AI solutions.

 

Requirements:

- A Master's or Ph.D. degree in Computer Science, Statistics, Data Science, or a related field.

- Proven experience working as a Software Engineer, with a focus on ML engineering and exposure to generative AI applications such as ChatGPT.

- Strong proficiency in programming languages such as Java, Scala, and Python, and in platforms and tools such as Google Cloud, BigQuery, Hadoop, and Spark.

- Solid knowledge of software engineering best practices, including version control systems (e.g., Git), code reviews, and testing methodologies.

- Familiarity with large language models (LLMs), prompt engineering techniques, vector DBs, embeddings, and various fine-tuning techniques (see the sketch after this list).

- Strong communication skills to effectively collaborate and present findings to both technical and non-technical stakeholders.

- Proven ability to adapt and learn new technologies and frameworks quickly.

- A proactive mindset with a passion for continuous learning and research in the field of generative AI.
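As a toy illustration of the vector DB and embeddings familiarity mentioned above, the sketch below retrieves the stored embeddings most similar to a query using cosine similarity in plain NumPy. A real system would delegate this lookup to a vector database; all data here is random and hypothetical.

```python
# Minimal sketch, assuming embeddings have already been computed elsewhere:
# rank stored document embeddings against a query embedding by cosine similarity.
import numpy as np

def top_k_similar(query_emb: np.ndarray, doc_embs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k stored embeddings closest to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    return np.argsort(scores)[::-1][:k]

# Toy example with random 8-dimensional "embeddings".
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 8))
query = rng.normal(size=8)
print(top_k_similar(query, docs))
```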

 

If you are a skilled and innovative data scientist with a passion for generative AI and a desire to contribute to technology process improvements, we would love to hear from you. Join our team and help shape the future of our AI-driven technology solutions.

Kloud9 Technologies
Posted by Prem Kumar
Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹24L / yr
Machine Learning (ML)
Data Science
Python
Java
R Programming

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. Traditional e-commerce infrastructure in any industry is limiting and poses a huge challenge in terms of the money spent on physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to the retail industry, empowering our clients to take their business to the next level. Our team of proficient architects, engineers, and developers has been designing, building, and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform- and technology-independent. Our vendor independence not only provides us with a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.

 

Responsibilities:

●       Studying, transforming, and converting data science prototypes

●       Deploying models to production

●       Training and retraining models as needed

●       Analyzing the ML algorithms that could be used to solve a given problem and ranking them by their respective scores

●       Analyzing the errors of the model and designing strategies to overcome them

●       Identifying differences in data distribution that could affect model performance in real-world situations

●       Performing statistical analysis and using results to improve models

●       Supervising the data acquisition process if more data is needed

●       Defining data augmentation pipelines

●       Defining the pre-processing or feature engineering to be done on a given dataset

●       Extending and enriching existing ML frameworks and libraries

●       Understanding when the findings can be applied to business decisions

●       Documenting machine learning processes

 

Basic requirements: 

 

●       4+ years of IT experience in which at least 2+ years of relevant experience primarily in converting data science prototypes and deploying models to production

●       Proficiency with Python and machine learning libraries such as scikit-learn, matplotlib, seaborn, and pandas (see the sketch after this list)

●       Knowledge of Big Data frameworks like Hadoop, Spark, Pig, Hive, Flume, etc

●       Experience in working with ML frameworks like TensorFlow, Keras, OpenCV

●       Strong written and verbal communication skills

●       Excellent interpersonal and collaboration skills.

●       Expertise in visualizing and manipulating big datasets

●       Familiarity with Linux

●       Ability to select hardware to run an ML model with the required latency

●       Robust data modelling and data architecture skills.

●       Advanced degree in Computer Science/Math/Statistics or a related discipline.

●       Advanced Math and Statistics skills (linear algebra, calculus, Bayesian statistics, mean, median, variance, etc.)
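A minimal scikit-learn sketch of the prototype-to-production workflow described above: a single Pipeline that is trained, evaluated on a holdout set, and serialized. The dataset is a built-in toy dataset and the output file name is illustrative only.

```python
# A minimal sketch: train, evaluate, and persist a complete scikit-learn pipeline
# so that preprocessing and the model travel together into production.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Persisting the whole pipeline keeps preprocessing and model versioned together.
joblib.dump(model, "model.joblib")
```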

 

Nice to have

●       Familiarity with writing Java and R code.

●       Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world

●       Verifying data quality, and/or ensuring it via data cleaning

●       Supervising the data acquisition process if more data is needed

●       Finding available datasets online that could be used for training

 

Why Explore a Career at Kloud9:

 

With job opportunities in prime locations in the US, London, Poland, and Bengaluru, we help you build a career path in cutting-edge technologies such as AI, machine learning, and data science. Be part of an inclusive and diverse workforce that is changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

Ganit Business Solutions

3 recruiters
Posted by Viswanath Subramanian
Chennai, Bengaluru (Bangalore), Mumbai
4 - 6 yrs
₹7L - ₹15L / yr
SQL
Amazon Web Services (AWS)
Data Warehouse (DWH)
Informatica
ETL
+1 more

Responsibilities:

  • Must be able to write quality code and build secure, highly available systems.
  • Assemble large, complex datasets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc., with guidance.
  • Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
  • Monitor performance and advise on any necessary infrastructure changes.
  • Define data retention policies.
  • Implement the ETL process and an optimal data pipeline architecture.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Create design documents that describe the functionality, capacity, architecture, and process.
  • Develop, test, and implement data solutions based on finalized design documents.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Proactively identify potential production issues and recommend and implement solutions.

Skillsets:

  • Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Proficient understanding of distributed computing principles.
  • Experience working with batch-processing and real-time systems using various open-source technologies like NoSQL, Spark, Pig, Hive, and Apache Airflow.
  • Implemented complex projects dealing with considerable data sizes (PB scale).
  • Optimization techniques (performance, scalability, monitoring, etc.).
  • Experience with integration of data from multiple data sources.
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB, etc.
  • Knowledge of various ETL techniques and frameworks, such as Flume.
  • Experience with various messaging systems, such as Kafka or RabbitMQ.
  • Good understanding of Lambda Architecture, along with its advantages and drawbacks.
  • Creation of DAGs for data engineering (a minimal example follows this list).
  • Expert at Python/Scala programming, especially for data engineering/ETL purposes.
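As a minimal illustration of DAG creation for data engineering, referenced above, the sketch below defines a stubbed extract-transform-load DAG. It assumes Apache Airflow 2.x import paths, uses placeholder task logic, and is not specific to this role.

```python
# A minimal Airflow DAG sketch for an extract -> transform -> load flow.
# Task bodies are stubs; names and schedule are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from source")


def transform():
    print("clean and reshape data")


def load():
    print("write to the warehouse")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    # Dependencies define the DAG: extract runs before transform, then load.
    t1 >> t2 >> t3
```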
Surplus Hand
Agency job
via SurplusHand by Anju John
Remote, Hyderabad
3 - 5 yrs
₹10L - ₹14L / yr
Apache Hadoop
Apache Hive
PySpark
Big Data
Java
+3 more
Tech Skills:
• Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.)
• Good hands-on experience with Spark (Spark with Java / PySpark)
• Hive
• Must be good with SQL (Spark SQL / HiveQL); see the sketch after this list
• Application design, software development, and automated testing
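A small PySpark sketch of the Spark SQL / HiveQL work mentioned above: register a DataFrame as a temporary view and query it with SQL. The dataset path and table/column names are assumed for illustration, and enableHiveSupport presumes a Hive metastore is available.

```python
# Minimal sketch: query data with Spark SQL against a temporary view.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-sql-example")
    .enableHiveSupport()      # assumes a configured Hive metastore
    .getOrCreate()
)

# Register a DataFrame as a temporary view and query it with Spark SQL.
events = spark.read.parquet("/data/events")   # assumed path
events.createOrReplaceTempView("events")

top_users = spark.sql("""
    SELECT user_id, COUNT(*) AS event_count
    FROM events
    GROUP BY user_id
    ORDER BY event_count DESC
    LIMIT 10
""")
top_users.show()
```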
Environment Experience:
• Experience implementing integrated automated release management using tools/technologies/frameworks like Maven, Git, code/security review tools, Jenkins, automated testing, and JUnit.
• Demonstrated experience with Agile or other rapid application development methods
• Cloud development (AWS/Azure/GCP)
• Unix / shell scripting
• Web services, open API development, and REST concepts
Tredence
Posted by Sharon Joseph
Bengaluru (Bangalore), Gurugram, Chennai, Pune
7 - 10 yrs
Best in industry
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Python
+1 more

Job Summary

As a Data Science Lead, you will manage multiple consulting projects of varying complexity and ensure on-time and on-budget delivery for clients. You will lead a team of data scientists and collaborate across cross-functional groups, while contributing to new business development, supporting strategic business decisions, and maintaining and strengthening the client base.

  1. Work with the team to define business requirements, come up with analytical solutions, and deliver them with a specific focus on the big picture to drive robustness of the solution.
  2. Work with teams of smart collaborators. Be responsible for their appraisals and career development.
  3. Participate in and lead executive presentations with client leadership stakeholders.
  4. Be part of an inclusive and open environment: a culture where making mistakes and learning from them is part of life.
  5. See how your work contributes to building an organization and be able to drive org-level initiatives that will challenge and grow your capabilities.

Role & Responsibilities

  1. Serve as an expert in data science and build frameworks to develop production-level DS/AI models.
  2. Apply AI research and ML models to accelerate business innovation and solve impactful business problems for our clients.
  3. Lead multiple teams across clients ensuring quality and timely outcomes on all projects.
  4. Lead and manage the onsite-offshore relationship while adding value to the client.
  5. Partner with business and technical stakeholders to translate challenging business problems into state-of-the-art data science solutions.
  6. Build a winning team focused on client success. Help team members build lasting career in data science and create a constant learning/development environment.
  7. Present results, insights, and recommendations to senior management with an emphasis on the business impact.
  8. Build engaging rapport with client leadership through relevant conversations and genuine business recommendations that impact the growth and profitability of the organization.
  9. Lead or contribute to org level initiatives to build the Tredence of tomorrow.

 

Qualification & Experience

  1. Bachelor's/Master's/PhD degree in a quantitative field (CS, Machine Learning, Mathematics, Statistics, Data Science) or equivalent experience.
  2. 6-10+ years of experience in data science, building hands-on ML models.
  3. Expertise in ML – regression, classification, clustering, time series modeling, graph networks, recommender systems, Bayesian modeling, deep learning, computer vision, NLP/NLU, reinforcement learning, federated learning, meta learning.
  4. Proficient in some or all of the following techniques: linear & logistic regression, decision trees, random forests, k-nearest neighbors, support vector machines, ANOVA, principal component analysis, gradient boosted trees, ANN, CNN, RNN, Transformers.
  5. Knowledge of programming languages and tools: SQL, Python/R, Spark.
  6. Expertise in ML frameworks and libraries (TensorFlow, Keras, PyTorch).
  7. Experience with cloud computing services (AWS, GCP, or Azure).
  8. Expert in statistical modelling & algorithms, e.g. hypothesis testing, sample size estimation, A/B testing (see the worked example after this list).
  9. Knowledge of mathematical programming – linear programming, mixed integer programming, etc. – and stochastic modelling – Markov chains, Monte Carlo, stochastic simulation, queuing models.
  10. Experience with optimization solvers (Gurobi, CPLEX) and algebraic modeling languages (PuLP).
  11. Knowledge of GPU code optimization and Spark MLlib optimization.
  12. Familiarity with deploying and monitoring ML models in production and delivering data products to end-users.
  13. Experience with ML CI/CD pipelines.
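As a worked example of the hypothesis-testing and A/B-testing skills listed above, the sketch below runs a two-proportion z-test with SciPy on made-up conversion numbers.

```python
# A small worked example: two-proportion z-test for an A/B experiment.
# Conversion counts are entirely hypothetical.
from math import sqrt
from scipy.stats import norm

# Conversions / visitors per variant (made-up data).
conv_a, n_a = 420, 10_000
conv_b, n_b = 480, 10_000

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))          # two-sided test

print(f"lift: {p_b - p_a:.4f}, z = {z:.2f}, p-value = {p_value:.4f}")
```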
Oneistox India Pvt Ltd
Posted by Neelu Singh
Gurugram
2 - 5 yrs
₹4L - ₹9L / yr
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
+9 more

Job Description

Function: Product → Product Analytics

 

Responsibilities:

 

  • Assist product managers in formulating the company's product strategy using structured data and the insights derived from it
  • Conduct research, create business cases, and translate them into meaningful problems to solve
  • Measure the impact of experiments related to the function, analysing results and helping in course correction (see the sketch after this list)
  • Recommend product improvements based on analytical findings; define new metrics, techniques, and strategies to improve performance
  • Constantly monitor and analyse the identified metrics, and publish insights and any anomalies along with a hypothesis
  • Translate business requirements and user requests into effective report and dashboard designs under challenging deadlines
  • Assist with performance tuning of dashboards and background data queries as needed
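A small, illustrative pandas sketch of measuring an experiment's impact as described above: compute conversion rates per variant from an event-level table. The column names and numbers are hypothetical.

```python
# Minimal sketch: per-variant conversion rate from a toy event-level table.
import pandas as pd

events = pd.DataFrame({
    "user_id":   [1, 2, 3, 4, 5, 6],
    "variant":   ["control", "control", "control", "test", "test", "test"],
    "converted": [0, 1, 0, 1, 1, 0],
})

summary = (
    events.groupby("variant")["converted"]
    .agg(users="count", conversions="sum")
)
summary["conversion_rate"] = summary["conversions"] / summary["users"]
print(summary)
```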

 

Key Skills Required:

 

  • Bachelor's degree along with 2+ years of experience in product analytics, building data sets, reports, and dashboards
  • Strong analytics skills and experience in Metabase, Google Analytics, Power BI, or other analytics software
  • Proficiency with SQL
  • Agility to anticipate needs, be responsive, and adapt to change
  • Strong interpersonal and relationship skills, with the ability to influence decisions and gain consensus
  • Excellent time and project management skills, with the ability to prioritise the most important projects to create business impact

Perks at Oneistox:

 

  • Challenging work, High Product Ownership, and Steep Learning Curve are guaranteed!
  • You get to be part of a young and energetic team.
  • Envisage the growth of a company from 5X to 500X.
  • Industry standard compensation and ESOPS.
KARZA
Agency job
via Seven N Half by Viral Jain
Remote only
2 - 5 yrs
₹6L - ₹19L / yr
RNN
Deep Learning
Machine Learning (ML)
Sentiment Analysis
LSTM
+2 more
NLP ENGINEER at KARZA TECHNOLOGIES

● Identify and integrate new datasets that can be leveraged through our product capabilities, and work closely with the engineering team to strategize and execute the development of data products
● Execute analytical experiments methodically to help solve various problems and make a true impact across various domains and industries
● Identify relevant data sources and sets to mine for client business needs, and collect large structured and unstructured datasets and variables
● Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve models, and clean and validate data for uniformity and accuracy
● Analyze data for trends and patterns, and interpret data with a clear objective in mind
● Implement analytical models into production by collaborating with software developers and machine learning engineers
● Communicate analytic solutions to stakeholders and implement improvements as needed to operational systems

What you need to work with us:
● Good understanding of data structures, algorithms, and the first principles of mathematics
● Proficient in Python and packages like NLTK, NumPy, and pandas
● Should have worked with deep learning frameworks (like TensorFlow, Keras, PyTorch, etc.)
● Hands-on experience in Natural Language Processing and sequence/RNN-based models (a classical baseline sketch follows this list)
● Mathematical intuition for ML and DL algorithms
● Should be able to perform thorough model evaluation by creating hypotheses on the basis of statistical analyses
● Should be comfortable going through open-source code and reading research papers
● Should be curious or thoughtful enough to answer the “WHYs” pertaining to the most cherished observations, thumb rules, and ideas across the data science community
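As a classical baseline for the NLP skills listed above (the kind of starting point one might compare RNN/LSTM models against), the sketch below trains a TF-IDF plus logistic-regression sentiment classifier with scikit-learn on a tiny, made-up dataset.

```python
# A compact text-classification baseline: TF-IDF features + logistic regression.
# The four-example dataset is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = [
    "great product, works perfectly",
    "terrible support, very disappointed",
    "love the fast delivery",
    "broke after two days, awful",
]
labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression()),
])
clf.fit(texts, labels)
print(clf.predict(["really happy with this purchase"]))
```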
Qualification and Experience Required:
● 1 - 4 years of relevant experience
● Bachelor's / Master's degree in Computer Science / Computer Engineering / Information Technology
Srijan Technologies

6 recruiters
Posted by PriyaSaini
Remote only
2 - 6 yrs
₹8L - ₹13L / yr
PySpark
SQL
Data modeling
Data Warehouse (DWH)
Informatica
+2 more
● 3+ years of professional work experience with a reputed analytics firm
● Expertise in handling large amounts of data through Python or PySpark
● Conduct data assessment, perform data quality checks, and transform data using SQL and ETL tools (see the sketch after this list)
● Experience deploying ETL / data pipelines and workflows in cloud technologies and architectures such as Azure and Amazon Web Services will be valued
● Comfort with data modelling principles (e.g. database structure, entity relationships, UID, etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
● A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
● Track record of strong problem-solving, requirement gathering, and leading by example
● Ability to thrive in a flexible and collaborative environment
● Track record of completing projects successfully on time, within budget, and as per scope
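A minimal PySpark sketch of the data quality checks mentioned in the list above: count duplicate keys, null values, and out-of-range records. The dataset path and column names are assumed purely for illustration.

```python
# Minimal sketch: simple data quality checks over a hypothetical customers table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/data/raw/customers")   # assumed path

total = df.count()
checks = {
    "duplicate_ids": total - df.select("customer_id").distinct().count(),
    "null_emails": df.filter(F.col("email").isNull()).count(),
    "negative_balances": df.filter(F.col("balance") < 0).count(),
}
for name, failures in checks.items():
    print(f"{name}: {failures} failing rows out of {total}")
```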
Material Depot

1 recruiter
Posted by Sarthak Agrawal
Hyderabad, Bengaluru (Bangalore)
2 - 6 yrs
₹12L - ₹20L / yr
Data Science
Machine Learning (ML)
Deep Learning
R Programming
Python
+1 more

Job Description

Do you have a passion for computer vision and deep learning problems? We are looking for someone who thrives on collaboration and wants to push the boundaries of what is possible today! Material Depot (materialdepot.in) is on a mission to be India’s largest tech company in the Architecture, Engineering and Construction space by democratizing the construction ecosystem and bringing stakeholders onto a common digital platform. Our engineering team is responsible for developing Computer Vision and Machine Learning tools to enable digitization across the construction ecosystem. The founding team includes people from top management consulting firms and top colleges in India (like BCG and IITB), has worked extensively in the construction space globally, and the company is funded by top Indian VCs.

Our team empowers Architectural and Design Businesses to effectively manage their day to day operations. We are seeking an experienced, talented Data Scientist to join our team. You’ll be bringing your talents and expertise to continue building and evolving our highly available and distributed platform.

Our solutions need complex problem solving in computer vision that require robust, efficient, well tested, and clean solutions. The ideal candidate will possess the self-motivation, curiosity, and initiative to achieve those goals. Analogously, the candidate is a lifelong learner who passionately seeks to improve themselves and the quality of their work. You will work together with similar minds in a unique team where your skills and expertise can be used to influence future user experiences that will be used by millions.

In this role, you will:

  • Extensive knowledge in machine learning and deep learning techniques
  • Solid background in image processing/computer vision
  • Experience in building datasets for computer vision tasks
  • Experience working with and creating data structures / architectures
  • Proficiency in at least one major machine learning framework
  • Experience visualizing data to stakeholders
  • Ability to analyze and debug complex algorithms
  • Good understanding of and applied experience in classic 2D image processing and segmentation (see the sketch after this list)
  • Robust semantic object detection under different lighting conditions
  • Segmentation of non-rigid contours in challenging/low contrast scenarios
  • Sub-pixel accurate refinement of contours and features
  • Experience in image quality assessment
  • Experience with in depth failure analysis of algorithms
  • Highly skilled in at least one scripting language such as Python or Matlab and solid experience in C++
  • Creativity and curiosity for solving highly complex problems
  • Excellent communication and collaboration skills
  • Mentor and support other technical team members in the organization
  • Create, improve, and refine workflows and processes for delivering quality software on time and with carefully calculated debt
  • Work closely with product managers, customer support representatives, and account executives to help the business move fast and efficiently through relentless automation.
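As a small illustration of classic 2D image processing and contour segmentation, referenced in the list above, the sketch below thresholds an image with Otsu's method and extracts large external contours using OpenCV. The input image path is hypothetical, and the two-value return of findContours assumes OpenCV 4.x.

```python
# Minimal sketch: Otsu threshold + external contour extraction with OpenCV.
import cv2

image = cv2.imread("sample_tile.jpg")            # assumed input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Otsu thresholding picks the foreground/background split automatically.
_, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Keep only reasonably large contours to drop noise.
large = [c for c in contours if cv2.contourArea(c) > 500]

annotated = image.copy()
cv2.drawContours(annotated, large, -1, (0, 255, 0), 2)
cv2.imwrite("contours.png", annotated)
print(f"found {len(large)} large contours")
```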

How you will do this:

  • You’re part of an agile, multidisciplinary team.
  • You bring your own unique skill set to the table and collaborate with others to accomplish your team’s goals.
  • You prioritize your work with the team and its product owner, weighing both the business and technical value of each task.
  • You experiment, test, try, fail, and learn continuously.
  • You don’t do things just because they were always done that way; you bring your experience and expertise with you and help the team make the best decisions.

For this role, you must have:

  • Strong knowledge of and experience with the functional programming paradigm.
  • Experience conducting code reviews, providing feedback to other engineers.
  • Great communication skills and a proven ability to work as part of a tight-knit team.
Infogain
Agency job
via Technogen India PvtLtd by RAHUL BATTA
NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore), Mumbai, Pune
7 - 8 yrs
₹15L - ₹16L / yr
Data steward
MDM
Tamr
Reltio
Data engineering
+7 more
  1. Data Steward :

The Data Steward will collaborate and work closely with the group's software engineering and business divisions. The Data Steward has overall accountability for the group's/division's overall data and reporting posture by responsibly managing data assets, data lineage, and data access, and by supporting sound data analysis. This role requires focus on data strategy, execution, and support for projects, programs, application enhancements, and production data fixes. The Data Steward makes well-thought-out decisions on complex or ambiguous data issues, establishes the data stewardship and information management strategy and direction for the group, and effectively communicates with individuals at various levels of the technical and business communities. This individual will become part of the corporate Data Quality and Data Management / entity resolution team supporting various systems across the board.

 

Primary Responsibilities:

 

  • Responsible for data quality and data accuracy across all group/division delivery initiatives.
  • Responsible for data analysis, data profiling, data modeling, and data mapping capabilities.
  • Responsible for reviewing and governing data queries and DML.
  • Accountable for the assessment, delivery, quality, accuracy, and tracking of any production data fixes.
  • Accountable for the performance, quality, and alignment to requirements for all data query design and development.
  • Responsible for defining standards and best practices for data analysis, modeling, and queries.
  • Responsible for understanding end-to-end data flows and identifying data dependencies in support of delivery, release, and change management.
  • Responsible for the development and maintenance of an enterprise data dictionary that is aligned to data assets and the business glossary for the group; responsible for the definition and maintenance of the group's data landscape, including overlays with the technology landscape, end-to-end data flows/transformations, and data lineage.
  • Responsible for rationalizing the group's reporting posture through the definition and maintenance of a reporting strategy and roadmap.
  • Partners with the data governance team to ensure data solutions adhere to the organization’s data principles and guidelines.
  • Owns group's data assets including reports, data warehouse, etc.
  • Understand customer business use cases and be able to translate them to technical specifications and vision on how to implement a solution.
  • Accountable for defining the performance tuning needs for all group data assets and managing the implementation of those requirements within the context of group initiatives as well as steady-state production.
  • Partners with others in test data management and masking strategies and the creation of a reusable test data repository.
  • Responsible for solving data-related issues and communicating resolutions with other solution domains.
  • Actively and consistently support all efforts to simplify and enhance the Clinical Trial Prediction use cases.
  • Apply knowledge in analytic and statistical algorithms to help customers explore methods to improve their business.
  • Contribute toward analytical research projects through all stages including concept formulation, determination of appropriate statistical methodology, data manipulation, research evaluation, and final research report.
  • Visualize and report data findings creatively in a variety of visual formats that appropriately provide insight to the stakeholders.
  • Achieve defined project goals within customer deadlines; proactively communicate status and escalate issues as needed.

 

Additional Responsibilities:

 

  • Strong understanding of the Software Development Life Cycle (SDLC) with Agile Methodologies
  • Knowledge and understanding of industry-standard/best practices requirements gathering methodologies.
  • Knowledge and understanding of Information Technology systems and software development.
  • Experience with data modeling and test data management tools.
  • Experience in data integration projects
  • Good problem-solving & decision-making skills
  • Good communication skills within the team, site, and with the customer

 

Knowledge, Skills and Abilities

 

  • Technical expertise in data architecture principles and design aspects of various DBMS and reporting concepts.
  • Solid understanding of key DBMS platforms like SQL Server, Azure SQL
  • Results-oriented, diligent, and works with a sense of urgency. Assertive, responsible for his/her own work (self-directed), have a strong affinity for defining work in deliverables, and be willing to commit to deadlines.
  • Experience in MDM tools like MS DQ, SAS DM Studio, Tamr, Profisee, Reltio etc.
  • Experience in Report and Dashboard development
  • Statistical and Machine Learning models
  • Python (sklearn, numpy, pandas, gensim); see the sketch after this list
  • Nice to Have:
  • 1yr of ETL experience
  • Natural Language Processing
  • Neural networks and Deep learning
  • Experience with the Keras, TensorFlow, spaCy, NLTK, and LightGBM Python libraries
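As a toy illustration of the entity resolution and data quality work described in this role, the sketch below flags likely duplicate records with pandas and the standard library's SequenceMatcher. Real MDM tools (Tamr, Reltio, etc.) replace this kind of naive pairwise comparison at scale; all records and the similarity threshold here are made up.

```python
# Illustrative sketch only: naive duplicate-candidate detection for entity resolution.
from difflib import SequenceMatcher
from itertools import combinations

import pandas as pd

records = pd.DataFrame({
    "id":   [1, 2, 3, 4],
    "name": ["Acme Pharma Ltd", "ACME Pharma Limited",
             "Zenith Labs", "Zenith Laboratories"],
})

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Compare every pair and flag likely duplicates above a chosen threshold.
pairs = []
for (i, row_a), (j, row_b) in combinations(records.iterrows(), 2):
    score = similarity(row_a["name"], row_b["name"])
    if score > 0.7:
        pairs.append((row_a["id"], row_b["id"], round(score, 2)))

print(pd.DataFrame(pairs, columns=["id_a", "id_b", "score"]))
```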

 

Interaction: Frequently interacts with subordinate supervisors.

Education: Bachelor’s degree, preferably in Computer Science, B.E., or another quantitative field related to the area of assignment. Professional certification related to the area of assignment may be required.

Experience: 7 years of pharmaceutical/biotech/life sciences experience; 5 years of clinical trials experience and knowledge; excellent documentation, communication, and presentation skills, including PowerPoint.

 
