Generalized linear model Jobs in Bangalore (Bengaluru)


Apply to 11+ Generalized linear model Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Generalized linear model Job opportunities across top companies like Google, Amazon & Adobe.

Banyan Data Services
Posted by Sathish Kumar
Bengaluru (Bangalore)
3 - 15 yrs
₹6L - ₹20L / yr
Data Science
Data Scientist
MongoDB
Java
Big Data
+14 more

Senior Big Data Engineer 

Note: Notice period: 45 days

Banyan Data Services (BDS) is a US-based, data-focused company headquartered in San Jose, California, specializing in comprehensive data solutions and services.

 

We are looking for a Senior Hadoop Big Data Engineer with expertise in solving complex data problems across a big data platform. You will be part of our development team based in Bangalore, which focuses on the most innovative and emerging data infrastructure software and services to support highly scalable and available infrastructure.

 

It's a once-in-a-lifetime opportunity to join our rocket-ship startup run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges.

 

 

Key Qualifications

 

·   5+ years of experience working with Java and Spring technologies

· At least 3 years of programming experience working with Spark on big data; including experience with data profiling and building transformations

· Knowledge of microservices architecture is a plus

· Experience with any NoSQL databases such as HBase, MongoDB, or Cassandra

· Experience with Kafka or any streaming tools

· Knowledge of Scala would be preferable

· Experience with agile application development 

· Exposure to cloud technologies, including containers and Kubernetes

· Demonstrated experience performing DevOps for platforms

· Strong skills in data structures and algorithms, with a focus on writing efficient, low-complexity code

· Exposure to Graph databases

· Passion for learning new technologies and the ability to do so quickly 

· A Bachelor's degree in a computer-related field or equivalent professional experience is required
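The data-structures-and-algorithms point above usually comes down to choosing structures that keep code complexity low. As a minimal illustration (ours, not the employer's), switching a list membership test to a set turns a quadratic de-duplication into a linear one:

```python
def dedup_quadratic(items):
    """O(n^2): list membership scans the whole list for each element."""
    seen = []
    for x in items:
        if x not in seen:  # O(n) scan per element
            seen.append(x)
    return seen

def dedup_linear(items):
    """O(n): set membership is average O(1) per lookup."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

data = [3, 1, 3, 2, 1, 4]
assert dedup_quadratic(data) == dedup_linear(data) == [3, 1, 2, 4]
```

Both functions preserve first-occurrence order; only the membership-test data structure differs.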

 

Key Responsibilities

 

· Scope and deliver solutions with the ability to design solutions independently based on high-level architecture

· Design and develop big data-focused microservices

· Work on big data infrastructure, distributed systems, data modeling, and query processing

· Build software with cutting-edge technologies in the cloud

· Willingness to learn new technologies and take on research-oriented projects

· Proven interpersonal skills; contribute to team effort by accomplishing related results as needed

Sahaj AI Software
Posted by Priya R
Chennai, Bengaluru (Bangalore), Pune
8 - 14 yrs
Best in industry
Data engineering
Python
Scala
Databricks
Apache Spark
+3 more

About Us

Sahaj Software is an artisanal software engineering firm built on the values of trust, respect, curiosity, and craftsmanship, delivering purpose-built solutions that drive data-led transformation for organisations. Our emphasis is on craft: we create purpose-built solutions, leveraging Data Engineering, Platform Engineering and Data Science with a razor-sharp focus to solve complex business and technology challenges and give customers a competitive edge.


About The Role

As a Data Engineer, you’ll feel at home if you are hands-on, grounded, opinionated and passionate about delivering comprehensive data solutions that align with modern data architecture approaches. Your work will range from building a full data platform to building data pipelines or helping with data architecture and strategy. This role is ideal for those looking to have a large impact and huge scope for growth, while still being hands-on with technology. We aim to allow growth without becoming “post-technical”.


Responsibilities

  • Collaborate with Data Scientists and Engineers to deliver production-quality AI and Machine Learning systems
  • Build frameworks and supporting tooling for data ingestion from a complex variety of sources
  • Consult with our clients on data strategy, modernising their data infrastructure, architecture and technology
  • Model their data for increased visibility and performance
  • You will be given ownership of your work and are encouraged to propose alternatives and make a case for doing things differently; our clients trust us and we manage ourselves
  • You will work in short sprints to deliver working software
  • You will work with other data engineers at Sahaj and help build Data Engineering capability across the organisation


You can read more about what we do and how we think here: https://sahaj.ai/client-stories/


Skills you’ll need

  • Demonstrated experience as a Senior Data Engineer in complex enterprise environments
  • Deep understanding of technology fundamentals and experience with languages like Python, or functional programming languages like Scala
  • Demonstrated experience in the design and development of big data applications using tech stacks like Databricks, Apache Spark, HDFS, HBase and Snowflake
  • Strong skills in building data products by integrating large sets of data from hundreds of internal and external sources
  • A nuanced understanding of code quality, maintainability and practices like Test Driven Development
  • Ability to deliver an application end to end; having an opinion on how your code should be built, packaged and deployed using CI/CD
  • Understanding of Cloud platforms, DevOps, GitOps, and Containers


What will you experience as a culture at Sahaj?

At Sahaj, the collective stands for a shared purpose where everyone owns the dreams, ideas, ideologies, successes, and failures of the organisation - a synergy rooted in the ethos of honesty, respect, trust, and equitability. At Sahaj, you will experience:

  • Creativity
  • Ownership
  • Curiosity
  • Craftsmanship
  • A culture of trust, respect and transparency
  • Opportunity to collaborate with some of the finest minds in the industry
  • Work across multiple domains


What are the benefits of being at Sahaj?

  • Unlimited leaves
  • Life insurance & private health insurance paid by Sahaj
  • Stock options
  • No hierarchy
  • Open salaries


AI-powered cloud-based SaaS solution provider

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
3 - 6 yrs
₹20L - ₹40L / yr
Data Science
Weka
Data Scientist
Statistical Modeling
Mathematics
+5 more
Roles and Responsibilities
● Research and develop advanced statistical and machine learning models for analysis of large-scale, high-dimensional data.
● Dig deeper into data, understand its characteristics, evaluate alternate models, and validate hypotheses through theoretical and empirical approaches.
● Productize proven or working models into production-quality code.
● Collaborate with product management, marketing and engineering teams in business units to elicit and understand their requirements and challenges, and develop potential solutions.
● Stay current with the latest research and technology ideas; share knowledge by clearly articulating results and ideas to key decision makers.
● File patents for innovative solutions that add to the company's IP portfolio.

Requirements
● 4 to 6 years of strong experience in data mining, machine learning and statistical analysis.
● BS/MS/PhD in Computer Science, Statistics, Applied Math, or related areas from premier institutes (only IITs / IISc / BITS / top NITs or top US universities should apply).
● Experience in productizing models to code in a fast-paced start-up environment.
● Expertise in the Python programming language and fluency in analytical tools such as Matlab, R, Weka, etc.
● Strong intuition for data and a keen aptitude for large-scale data analysis.
● Strong communication and collaboration skills.
Inviz Ai Solutions Private Limited
Posted by Shridhar Nayak
Bengaluru (Bangalore)
4 - 8 yrs
Best in industry
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more

InViz is a Bangalore-based startup helping enterprises simplify search and discovery experiences for both their end customers and their internal users. We use state-of-the-art technologies in Computer Vision, Natural Language Processing, Text Mining, and other ML techniques to extract information and concepts from data in different formats - text, images, videos - and make them easily discoverable through simple, human-friendly touchpoints.

 

TSDE - Data 

Data Engineer:

 

  • Should have 3-6 years of total experience in data engineering.
  • Experience coding data pipelines on GCP.
  • Prior experience with Hadoop systems is ideal, as the candidate may not have end-to-end GCP experience.
  • Strong in programming languages such as Scala, Python, and Java.
  • Good understanding of various data storage formats and their advantages.
  • Should have exposure to GCP tools for developing end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integrating API-based data sources).
  • Should have a business mindset to understand data and how it will be used for BI and analytics purposes.
  • Data Engineer certification preferred.
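The end-to-end pipelines described above (ingest from a traditional database plus an API-based source, then transform and load) share a common shape regardless of which GCP services back each stage. A stdlib-only sketch of that shape, with all names and data invented for illustration:

```python
def ingest_db_rows():
    # Stand-in for reading from a traditional database (e.g. via Cloud SQL).
    return [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "4.0"}]

def ingest_api_rows():
    # Stand-in for an API-based source (e.g. events consumed via Pub/Sub).
    return [{"id": 3, "amount": "7.25"}]

def transform(rows):
    # Normalize types so the downstream store sees a consistent schema.
    return [{"id": r["id"], "amount": float(r["amount"])} for r in rows]

def load(rows, sink):
    # Stand-in for writing to a warehouse table (e.g. BigQuery).
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(ingest_db_rows() + ingest_api_rows()), warehouse)
assert loaded == 3 and warehouse[0]["amount"] == 10.5
```

In a real GCP deployment each function would be replaced by a service client call, but the ingest → transform → load composition stays the same.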

 

Experience in working with GCP tools like:

Store: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore

Ingest: Stackdriver, Pub/Sub, App Engine, Kubernetes Engine, Kafka, Dataprep, microservices

Schedule: Cloud Composer

Processing: Cloud Dataproc, Cloud Dataflow, Cloud Dataprep

CI/CD: Bitbucket + Jenkins / GitLab

Atlassian Suite
Cloudbloom Systems LLP
Posted by Sahil Rana
Bengaluru (Bangalore)
6 - 10 yrs
₹13L - ₹25L / yr
Machine Learning (ML)
Python
Data Science
Natural Language Processing (NLP)
Computer Vision
+3 more
Description
Duties and Responsibilities:
• Research and develop innovative use cases, solutions and quantitative models.
• Quantitative models in video and image recognition and signal processing for cloudbloom's cross-industry business (e.g., Retail, Energy, Industry, Mobility, Smart Life and Entertainment).
• Design, implement and demonstrate proofs-of-concept and working prototypes.
• Provide R&D support to productize research prototypes.
• Explore emerging tools, techniques, and technologies, and work with academia on cutting-edge solutions.
• Collaborate with cross-functional teams and ecosystem partners for mutual business benefit.
• Team management skills.
Academic Qualification
• 7+ years of professional hands-on work experience in data science, statistical modelling, data engineering, and predictive analytics assignments.
• Mandatory requirement: Bachelor's degree with a STEM background (Science, Technology, Engineering and Mathematics) with a strong quantitative flavour.
• Innovative and creative in data analysis, problem solving and presentation of solutions.
• Ability to establish effective cross-functional partnerships and relationships at all levels in a highly collaborative environment.
• Strong experience in handling multi-national client engagements.
• Good verbal, writing & presentation skills.
Core Expertise
• Excellent understanding of basics in mathematics and statistics (such as differential equations, linear algebra, matrices, combinatorics, probability, Bayesian statistics, eigenvectors, Markov models, Fourier analysis).
• Building data analytics models using Python, ML libraries, Jupyter/Anaconda, and knowledge of database query languages like SQL.
• Good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM, and decision forests.
• Strong math skills (multivariable calculus and linear algebra) - understanding the fundamentals of multivariable calculus and linear algebra is important, as they form the basis of many predictive-performance and algorithm-optimization techniques.
• Deep learning: CNNs, neural networks, RNNs, TensorFlow, PyTorch, computer vision.
• Large-scale data extraction/mining, data cleansing, diagnostics, and preparation for modeling.
• Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, multivariate techniques & predictive modeling: cluster analysis, discriminant analysis, CHAID, logistic & multiple regression analysis.
• Experience with data visualization tools like Tableau, Power BI, and Qlik Sense that help to visually encode data.
• Excellent communication skills - it is incredibly important to describe findings to both technical and non-technical audiences.
• Capability for continuous learning and knowledge acquisition.
• Mentor colleagues for growth and success.
• Strong software engineering background.
• Hands-on experience with data science tools.
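Of the classical methods listed above, k-Nearest Neighbors is compact enough to sketch from scratch. This toy version (our illustration, with made-up points) classifies by Euclidean distance and majority vote:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of ((x, y), label) pairs.
    """
    by_distance = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
assert knn_predict(train, (0.5, 0.5)) == "a"
assert knn_predict(train, (5.5, 5.5)) == "b"
```

Production variants add distance weighting and vectorized neighbor search, but the distance-sort-vote core is the same.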
Games 24x7

Agency job
via zyoin by Shubha N
Bengaluru (Bangalore)
0 - 6 yrs
₹10L - ₹21L / yr
PowerBI
Big Data
Hadoop
Apache Hive
Business Intelligence (BI)
+5 more
Location: Bangalore
Work Timing: 5 Days A Week

Responsibilities include:

• Ensure the right stakeholders get the right information at the right time
• Requirement gathering with stakeholders to understand their data requirements
• Creating and deploying reports
• Participate actively in datamart design discussions
• Work on both RDBMS and Big Data for designing BI solutions
• Write code (queries/procedures) in SQL / Hive / Drill that is both functional and elegant, following appropriate design patterns
• Design and plan BI solutions to automate regular reporting
• Debugging, monitoring and troubleshooting BI solutions
• Creating and deploying datamarts
• Writing relational and multidimensional database queries
• Integrate heterogeneous data sources into BI solutions
• Ensure data integrity of data flowing from heterogeneous data sources into BI solutions
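The last responsibility, data integrity across heterogeneous sources, often comes down to reconciliation checks between source and BI target: row counts, control totals, and missing keys. A minimal sketch (column names invented for illustration):

```python
def reconcile(source_rows, target_rows, key="order_id", measure="amount"):
    """Compare row counts, a control total, and key coverage between source and target."""
    checks = {
        "row_count_match": len(source_rows) == len(target_rows),
        "control_total_match": (
            round(sum(r[measure] for r in source_rows), 2)
            == round(sum(r[measure] for r in target_rows), 2)
        ),
        # Keys present in the source but absent from the BI target.
        "missing_keys": {r[key] for r in source_rows} - {r[key] for r in target_rows},
    }
    checks["ok"] = (
        checks["row_count_match"]
        and checks["control_total_match"]
        and not checks["missing_keys"]
    )
    return checks

src = [{"order_id": 1, "amount": 9.99}, {"order_id": 2, "amount": 5.00}]
tgt = [{"order_id": 1, "amount": 9.99}]
assert reconcile(src, tgt)["ok"] is False  # one row was dropped in flight
assert reconcile(src, src)["ok"] is True
```

In practice these checks run as scheduled queries against both systems; the same three comparisons apply whatever the storage engines are.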

Minimum Job Qualifications:
• BE/B.Tech in Computer Science/IT from top colleges
• 1-5 years of experience in data warehousing and SQL
• Excellent analytical knowledge
• Excellent technical as well as communication skills
• Attention to even the smallest detail is mandatory
• Knowledge of SQL query writing and performance tuning
• Knowledge of Big Data technologies like Apache Hadoop, Apache Hive, Apache Drill
• Knowledge of the fundamentals of Business Intelligence
• In-depth knowledge of RDBMS systems, data warehousing and datamarts
• Smart, motivated and team-oriented
Desirable Requirements
• Sound knowledge of software development in programming (preferably Java)
• Knowledge of the software development lifecycle (SDLC) and its models
SaaS company striving to make selling fun with its SaaS incentive gamification product

Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹25L / yr
Relational Database (RDBMS)
PostgreSQL
MySQL
Python
Spark
+6 more

What is the role?

You will be responsible for developing and designing front-end web architecture, ensuring the responsiveness of applications, and working alongside graphic designers on web design features, among other duties. You will own the functional/technical track of the project.

Key Responsibilities

  • Develop and automate large-scale, high-performance data processing systems (batch and/or streaming).
  • Build high-quality software engineering practices towards building data infrastructure and pipelines at scale.
  • Lead data engineering projects to ensure pipelines are reliable, efficient, testable, & maintainable
  • Optimize performance to meet high throughput and scale

What are we looking for?

  • 4+ years of relevant industry experience.
  • Working with data at the terabyte scale.
  • Experience designing, building and operating robust distributed systems.
  • Experience designing and deploying high throughput and low latency systems with reliable monitoring and logging practices.
  • Building and leading teams.
  • Working knowledge of relational databases like PostgreSQL/MySQL.
  • Experience with Python / Spark / Kafka / Celery
  • Experience working with OLTP and OLAP systems
  • Excellent communication skills, both written and verbal.
  • Experience working in cloud e.g., AWS, Azure or GCP
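The batch-and-streaming processing the role describes can be illustrated with a tumbling-window aggregation over an event stream; this stdlib-only sketch (event shape invented, not the company's stack) works identically on a finite batch or a lazily read unbounded generator:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per fixed (tumbling) time window.

    `events` is an iterable of (epoch_seconds, payload) tuples.
    """
    counts = defaultdict(int)
    for ts, _payload in events:
        # Align each timestamp to the start of its window.
        window_start = ts - (ts % window_seconds)
        counts[window_start] += 1
    return dict(counts)

events = [(0, "a"), (30, "b"), (59, "c"), (60, "d"), (125, "e")]
assert tumbling_window_counts(events) == {0: 3, 60: 1, 120: 1}
```

Frameworks like Spark Structured Streaming or Kafka Streams provide the same windowing as a built-in operator, plus handling for late and out-of-order events.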

Whom will you work with?

You will work with a top-notch tech team, working closely with the architect and engineering head.

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle concepts while maintaining the quality of content, interact and share your ideas, and learn a great deal at work. Work with a team of highly talented young professionals and enjoy the benefits of being at this company.

We are

We strive to make selling fun with our SaaS incentive gamification product. Company is the #1 gamification software that automates and digitizes sales contests and commission programs. With game-like elements, rewards, recognitions, and complete access to relevant information, Company turbocharges an entire salesforce. Company also empowers sales managers with easy-to-publish game templates, leaderboards, and analytics to help accelerate performance and sustain growth.

We are a fun and high-energy team, with people from diverse backgrounds - united under the passion of getting things done. Rest assured that you shall get complete autonomy in your tasks and ample opportunities to develop your strengths.

Way forward

If you find this role exciting and want to join us in Bangalore, India, then apply by clicking below. Provide your details and upload your resume. All received resumes will be screened; shortlisted candidates will be invited for a discussion, and on mutual alignment and agreement we will proceed with hiring.

 
Fragma Data Systems
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore)
1 - 5 yrs
₹5L - ₹15L / yr
Spark
PySpark
Big Data
Python
SQL
+1 more
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience in SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good grasp of ELT architecture: business-rules processing and data extraction from a data lake into data streams for business consumption
• Good customer communication skills
• Good analytical skills
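For the SQL point, "fair complexity" typically means joins plus grouped aggregation with filtering. A self-contained sketch using Python's built-in sqlite3 (the schema and data are invented for illustration); the same SQL shape carries over to Spark SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'south'), (2, 'south'), (3, 'north');
    INSERT INTO orders VALUES (10, 1, 100.0), (11, 2, 50.0), (12, 3, 75.0), (13, 1, 25.0);
""")

# Revenue per region, keeping only regions above a control threshold.
rows = conn.execute("""
    SELECT c.region, COUNT(o.id) AS n_orders, SUM(o.amount) AS revenue
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY c.region
    HAVING revenue > 60
    ORDER BY revenue DESC
""").fetchall()

assert rows == [('south', 3, 175.0), ('north', 1, 75.0)]
```

In PySpark the equivalent would be a `join` followed by `groupBy(...).agg(...)` on DataFrames, or the identical SQL string via `spark.sql(...)`.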
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions - Azure Synapse / Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub / IoT Hub.
  • Experience migrating on-premises data warehouses to data platforms on the Azure cloud.
  • Designing and implementing data engineering, ingestion, and transformation functions.
  • Azure Synapse or Azure SQL Data Warehouse.
  • Spark on Azure (available in HDInsight and Databricks).
Graphene Services Pte Ltd
Posted by Swetha Seshadri
Bengaluru (Bangalore)
2 - 5 yrs
Best in industry
Python
MySQL
SQL
NOSQL Databases
PowerBI
+2 more

About Graphene  

Graphene is a Singapore-headquartered AI company that has been recognized as Singapore's best start-up by Switzerland's Seedstarsworld and awarded best AI platform for healthcare at Vivatech Paris. Graphene India is also a member of the exclusive NASSCOM Deeptech club. We are developing an AI platform that is disrupting and replacing traditional market research with unbiased insights, with a focus on healthcare, consumer goods and financial services.

  

Graphene was founded by Corporate leaders from Microsoft and P&G, and works closely with the Singapore Government & Universities in creating cutting edge technology which is gaining traction with many Fortune 500 companies in India, Asia and USA.  

Graphene’s culture is grounded in delivering customer delight by recruiting high potential talent and providing an intense learning and collaborative atmosphere, with many ex-employees now hired by large companies across the world.  

  

Graphene has a 6-year track record of delivering financially sustainable growth and is one of the rare start-ups which is self-funded and is yet profitable and debt free. We have already created a strong bench strength of Singaporean leaders and are recruiting and grooming more talent with a focus on our US expansion.   

  

Job title: Data Analyst

Job Description  

The Data Analyst is responsible for data storage, data enrichment, data transformation, data gathering based on data requests, and testing and maintaining data pipelines.

Responsibilities and Duties  

  • Managing end to end data pipeline from data source to visualization layer 
  • Ensure data integrity; Ability to pre-empt data errors 
  • Organized management and storage of data
  • Provide quality assurance of data, working with quality assurance analysts if necessary. 
  • Commissioning and decommissioning of data sets. 
  • Processing confidential data and information according to guidelines. 
  • Helping develop reports and analysis. 
  • Troubleshooting the reporting database environment and reports. 
  • Managing and designing the reporting environment, including data sources, security, and metadata. 
  • Supporting the data warehouse in identifying and revising reporting requirements. 
  • Supporting initiatives for data integrity and normalization. 
  • Evaluating changes and updates to source production systems. 
  • Training end-users on new reports and dashboards. 
  • Initiate data gathering based on data requirements 
  • Analyse the raw data to check if the requirement is satisfied 

 

Qualifications and Skills   

  

  • Technologies required: Python, SQL / NoSQL databases (Cosmos DB)
  • 2-5 years of experience required, in data analysis using Python
  • Understanding of the software development life cycle

  • Plan, coordinate, develop, test and support data pipelines; document and support reporting dashboards (Power BI)
  • Automate the steps needed to transform and enrich data
  • Communicate issues, risks, and concerns proactively to management; document the process thoroughly to allow peers to assist with support as needed
  • Excellent verbal and written communication skills   
Prescience Decision Solutions
Posted by Shivakumar K
Bengaluru (Bangalore)
3 - 7 yrs
₹10L - ₹20L / yr
Big Data
ETL
Spark
Apache Kafka
Apache Spark
+4 more

The Data Engineer will be responsible for selecting and integrating the required Big Data tools and frameworks, and will implement data ingestion and ETL/ELT processes.

Required Experience, Skills and Qualifications:

  • Hands-on experience with Big Data tools/technologies like Spark, Databricks, MapReduce, Hive, HDFS.
  • Expertise and excellent understanding of the big data toolset, such as Sqoop, Spark Streaming, Kafka, NiFi.
  • Proficiency in any of Python / Scala / Java, with 4+ years' experience.
  • Experience with cloud infrastructure such as MS Azure, Data Lake, etc.
  • Good working knowledge of NoSQL databases (MongoDB, HBase, Cassandra).
BYGRAD
Posted by Bygrad Education
Bengaluru (Bangalore)
5 - 8 yrs
₹8L - ₹12L / yr
Data Science
R Programming
Python

Job description

 

Position: Data Scientist

Location: Bangalore

Long Term Contract position

 

Remote Till Covid

Experience in applied data science, analytics, and data storytelling is required.

  • Write well-documented code that can be shared and used across teams, and can scale for use in existing products. SQL, advanced Python or R (descriptive/predictive models), Tableau visualization. Working knowledge of Hadoop, BigQuery, Presto, Vertica
  • Apply your expertise in quantitative analysis, data mining, and the presentation of data to uncover unique actionable insights about customer service, health of public conversation and social media
  • Inform, influence, support, and execute analysis that feeds into one of our many analytics domains - Customer analytics, product analytics, business operation analytics, cost analytics, media analytics, people analytics
  • Select and deselect analytics priorities, insights and data based on ability to drive our desired outcomes
  • Own the end-to-end process, from initiation to deployment, through ongoing communication and collaboration, and share results with partners and leadership
  • Mentor and create a sense of community and a learning environment for our global team of data analysts

Soft skills:

  • Ability to communicate findings clearly to both technical and non-technical audiences and to effectively collaborate within cross-functional teams
  • Working knowledge of agile framework and processes.
  • You should be comfortable managing work plans, timelines and milestones
  • You have a sense of urgency, move quickly and ship things

Bonus Points:

  • You're experienced in metrics and experiment-driven development
  • Experience in statistical methodology (multivariate, time-series, experimental design, data mining, etc.)
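For the experiment-driven bonus point above, the core calculation is usually a two-proportion comparison between a control and a variant. A stdlib-only sketch (our illustration, with made-up numbers):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant lifts conversion from 10% to 12% with 5,000 users per arm.
z = two_proportion_z(conv_a=500, n_a=5000, conv_b=600, n_b=5000)
assert z > 1.96  # significant at the usual two-sided 5% level
```

Real experiment platforms layer sequential testing and multiple-comparison corrections on top, but this z-test is the standard starting point.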
