k-means clustering Jobs in Bangalore (Bengaluru)


Apply to 11+ k-means clustering Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest k-means clustering Job opportunities across top companies like Google, Amazon & Adobe.

Banyan Data Services

Posted by Sathish Kumar
Bengaluru (Bangalore)
3 - 15 yrs
₹6L - ₹20L / yr
Data Science
Data Scientist
MongoDB
Java
Big Data
+14 more

Senior Big Data Engineer 

Note: Notice period: 45 days

Banyan Data Services (BDS) is a US-based, data-focused company headquartered in San Jose, California, specializing in comprehensive data solutions and services.

 

We are looking for a Senior Hadoop Big Data Engineer with expertise in solving complex data problems across a big data platform. You will be part of our development team based in Bangalore, which focuses on the most innovative and emerging data infrastructure software and services supporting highly scalable and available infrastructure.

 

It's a once-in-a-lifetime opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges.

 

 

Key Qualifications

 

·   5+ years of experience working with Java and Spring technologies

· At least 3 years of programming experience working with Spark on big data, including experience with data profiling and building transformations (see the PySpark sketch after this list)

· Knowledge of microservices architecture is a plus

· Experience with any NoSQL databases such as HBase, MongoDB, or Cassandra

· Experience with Kafka or any streaming tools

· Knowledge of Scala would be preferable

· Experience with agile application development 

· Exposure to any cloud technologies, including containers and Kubernetes

· Demonstrated experience performing DevOps for platforms

· Strong skills in data structures and algorithms, with an eye for writing efficient, low-complexity code

· Exposure to Graph databases

· Passion for learning new technologies and the ability to do so quickly 

· A Bachelor's degree in a computer-related field or equivalent professional experience is required
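
For illustration, below is a minimal PySpark sketch of the kind of data profiling and transformation work the Spark bullet above refers to. The input path, schema, and column names are hypothetical assumptions, not part of this posting.

```python
# A hedged sketch only: the path and column names (event_ts, user_id) are
# hypothetical. Shows basic profiling plus a simple daily aggregation.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profiling-sketch").getOrCreate()

# Ingest raw events (hypothetical HDFS path).
df = spark.read.parquet("hdfs:///data/raw/events")

# Profiling: total rows and per-column null counts.
print("rows:", df.count())
df.select([F.sum(F.col(c).isNull().cast("int")).alias(c)
           for c in df.columns]).show()

# Transformation: standardize a timestamp and aggregate by day.
daily = (df.withColumn("event_date", F.to_date("event_ts"))
           .groupBy("event_date")
           .agg(F.count("*").alias("events"),
                F.countDistinct("user_id").alias("users")))
daily.write.mode("overwrite").parquet("hdfs:///data/curated/daily_events")
```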

 

Key Responsibilities

 

· Scope and deliver solutions, with the ability to design them independently based on high-level architecture

· Design and develop big-data-focused microservices

· Work on big data infrastructure, distributed systems, data modeling, and query processing

· Build software with cutting-edge technologies on the cloud

· Be willing to learn new technologies and take on research-oriented projects

· Demonstrate strong interpersonal skills, contributing to team efforts and accomplishing related results as needed

Publicis Sapient
Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Job Summary:

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala or Python, experience in data ingestion, integration and wrangling, computation, and analytics pipelines, and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP or Azure cloud platforms.


Role & Responsibilities:

Your role is focused on the design, development and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode (see the streaming sketch after this list)

• Build functionality for data analytics, search and aggregation
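
As an illustration of the batch & real-time ingestion responsibilities above, here is a minimal Spark Structured Streaming sketch that lands Kafka messages as Parquet. The broker, topic, and paths are hypothetical, and it assumes the spark-sql-kafka connector package is available at runtime.

```python
# Hedged sketch: broker address, topic, and paths are hypothetical.
# Requires the org.apache.spark:spark-sql-kafka-0-10 package at runtime.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load())

# Kafka delivers key/value as binary; cast them before landing the payload.
parsed = stream.select(F.col("key").cast("string"),
                       F.col("value").cast("string"),
                       F.col("timestamp"))

(parsed.writeStream
       .format("parquet")
       .option("path", "/data/landing/orders")
       .option("checkpointLocation", "/data/checkpoints/orders")
       .outputMode("append")
       .start()
       .awaitTermination())
```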

Experience Guidelines:

Mandatory Experience and Competencies:


1. Overall 5+ years of IT experience, with 3+ years in data-related technologies

2. Minimum 2.5 years of experience in big data technologies, with working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required to build end-to-end data pipelines

4. Strong experience in at least one of Java, Scala or Python; Java preferred

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

6. Working knowledge of data-platform-related services on at least one cloud platform, including IAM and data security


Preferred Experience and Knowledge (Good to Have):


1. Good hands-on knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres)

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures

4. Performance tuning and optimization of data pipelines

5. CI/CD – infra provisioning on cloud, automated build & deployment pipelines, code quality

6. Cloud data specialty and other related big data technology certifications


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Digital Banking Firm
Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹40L / yr
Apache Kafka
Hadoop
Spark
Apache Hadoop
Big Data
+5 more
Location - Bangalore (Remote for now)
 
Designation - Sr. SDE (Platform Data Science)
 
About Platform Data Science Team

The Platform Data Science team works at the intersection of data science and engineering. Domain experts develop and advance our platforms, including the data platforms, the machine learning platform, and other platforms for forecasting, experimentation, anomaly detection, conversational AI, underwriting of risk, portfolio management, and fraud detection & prevention, among many more. We are also the Data Science and Analytics partners for Product and provide behavioural science insights across Jupiter.
 
About the role:

We’re looking for strong software engineers who can combine EMR, Redshift, Hadoop, Spark, Kafka, Elasticsearch, TensorFlow, PyTorch and other technologies to build the next-generation Data Platform, ML Platform and Experimentation Platform. If this sounds interesting, we’d love to hear from you!
This role will involve designing and developing software products that impact many areas of our business. The individual in this role will help define requirements, create software designs, implement code to these specifications, provide thorough unit and integration testing, and support products while deployed and used by our stakeholders.

Key Responsibilities:

Participate in, own, and influence the architecting & designing of systems
Collaborate with other engineers, data scientists and product managers
Build intelligent systems that drive decisions
Build systems that enable us to perform experiments and iterate quickly (see the bucketing sketch after the skills list below)
Build platforms that enable scientists to train, deploy and monitor models at scale
Build analytical systems that drive better decision-making
 

Required Skills:

Programming experience with at least one modern language such as Java or Scala, including object-oriented design
Experience in contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
Bachelor’s degree in Computer Science or related field
Computer Science fundamentals in object-oriented design
Computer Science fundamentals in data structures
Computer Science fundamentals in algorithm design, problem solving, and complexity analysis
Experience in databases, analytics, big data systems or business intelligence products:
Data lake, data warehouse, ETL, ML platform
Big data tech like Hadoop and Apache Spark
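
As an illustration of the experimentation-platform responsibility above, here is a minimal sketch of deterministic variant assignment, a primitive such platforms typically provide. The function name, experiment name, and user IDs are hypothetical.

```python
# Hedged sketch: names are illustrative. Hash-based bucketing gives every
# user a stable variant without storing assignments anywhere.
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Return a stable variant: the same user always gets the same bucket."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-42", "new-onboarding-flow"))
```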
Molecular Connections

Posted by Molecular Connections
Bengaluru (Bangalore)
8 - 10 yrs
₹15L - ₹20L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more
  1. Big data developer with 8+ years of professional IT experience, with expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis and data integration, and in implementing enterprise-level systems spanning big data.
  2. A skilled developer with strong problem-solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
  3. Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala and Oozie.
  4. Hands-on experience in creating real-time data streaming solutions using Apache Spark Core, Spark SQL & DataFrames, Kafka, Spark Streaming and Apache Storm.
  5. Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, which include the NameNode, DataNode, ResourceManager, NodeManager and Job History Server.
  6. Worked on both Cloudera and Hortonworks Hadoop distributions; experience in managing Hadoop clusters using the Cloudera Manager tool.
  7. Well versed in the installation, configuration and management of big data and the underlying infrastructure of a Hadoop cluster.
  8. Hands-on experience in coding MapReduce/YARN programs using Java, Scala and Python for analyzing big data.
  9. Exposure to the Cloudera development environment and management using Cloudera Manager.
  10. Extensively worked on Spark using Scala on clusters for computational analytics, installed on top of Hadoop, and performed advanced analytics by combining Spark with Hive and SQL/Oracle.
  11. Implemented Spark using Python, utilizing DataFrames and the Spark SQL API for faster data processing; handled importing data from different data sources into HDFS using Sqoop and performed transformations using Hive and MapReduce.
  12. Used the Spark DataFrames API on the Cloudera platform to perform analytics on Hive data.
  13. Hands-on experience with MLlib from Spark, used for predictive intelligence and customer segmentation (see the k-means sketch after this list), and for smooth maintenance in Spark Streaming.
  14. Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
  15. Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
  16. Worked on creating data pipelines for different events, from ingestion through aggregation, loading consumer response data into Hive external tables in HDFS to serve as feeds for Tableau dashboards.
  17. Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
  18. In-depth Understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
  19. Hands on expertise in real time analytics with Apache Spark.
  20. Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
  21. Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
  22. Experience in the Microsoft cloud and in setting up clusters on Amazon EC2 & S3, including automating the setup and extension of clusters in the AWS cloud.
  23. Extensively worked on Spark using Python on clusters for computational analytics, installed on top of Hadoop, and performed advanced analytics by combining Spark with Hive and SQL.
  24. Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
  25. Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions and on Amazon web services (AWS).
  26. Experienced in writing ad hoc queries using Cloudera Impala; also used Impala analytical functions.
  27. Experience in creating DataFrames using PySpark and performing operations on them using Python.
  28. In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS and MapReduce Programming Paradigm, High Availability and YARN architecture.
  29. Established multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
  30. Generated various kinds of knowledge reports using Power BI based on business specifications.
  31. Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
  32. Experienced in projects using JIRA, testing, and the Maven and Jenkins build tools.
  33. Experienced in designing, building, deploying and utilizing much of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance and auto-scaling.
  34. Good experience with use-case development and with software methodologies like Agile and Waterfall.
  35. Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
  36. Good working experience importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, mainframes, Oracle and Netezza to HDFS, and performing transformations on it using Hive, Pig and Spark.
  37. Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
  38. Proficient in NoSQL databases including HBase, Cassandra, MongoDB and its integration with Hadoop cluster.
  39. Hands-on experience in Hadoop big data technology, working with MapReduce, Pig and Hive as analysis tools, and Sqoop and Flume as data import/export tools.
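
Since this page indexes k-means clustering roles and item 13 above mentions customer segmentation with Spark MLlib, here is a minimal PySpark MLlib k-means sketch. The input path and feature columns are hypothetical assumptions.

```python
# Hedged sketch: the path and feature columns (recency_days, frequency,
# monetary_value) are hypothetical. Segments customers with k-means.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("kmeans-sketch").getOrCreate()
customers = spark.read.parquet("hdfs:///data/curated/customers")

# Assemble numeric behaviour features into a single vector column.
assembler = VectorAssembler(
    inputCols=["recency_days", "frequency", "monetary_value"],
    outputCol="features")
features = assembler.transform(customers)

# Fit k=5 clusters and attach a cluster label to each customer.
model = KMeans(k=5, seed=42, featuresCol="features").fit(features)
model.transform(features).groupBy("prediction").count().show()
```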
Affine Analytics

Posted by Santhosh M
Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹30L / yr
Data Warehouse (DWH)
Informatica
ETL
Google Cloud Platform (GCP)
Airflow
+2 more

Objective

The Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.


Roles and Responsibilities:

  • Should be comfortable building and optimizing performant data pipelines, covering data ingestion, data cleansing and curation into a data warehouse, database, or any other data platform, using Dask/Spark.
  • Experience with distributed computing environments and Spark/Dask architecture.
  • Optimize performance for data access requirements by choosing appropriate file formats (Avro, Parquet, ORC, etc.) and compression codecs (see the sketch after this list).
  • Experience writing production-ready code in Python, with tests; participate in code reviews to maintain and improve code quality, stability and supportability.
  • Experience in designing data warehouses/data marts.
  • Experience with any RDBMS, preferably SQL Server, and the ability to write complex SQL queries.
  • Expertise in requirement gathering and in writing technical design and functional documents.
  • Experience with Agile/Scrum practices.
  • Experience leading other developers and guiding them technically.
  • Experience deploying data pipelines using an automated CI/CD approach.
  • Ability to write modularized, reusable code components.
  • Proficient in identifying data issues and anomalies during analysis.
  • Strong analytical and logical skills.
  • Must be able to comfortably tackle new challenges and learn.
  • Must have strong verbal and written communication skills.
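
To illustrate the file-format bullet above, here is a minimal sketch of writing the same DataFrame as Parquet with Snappy and as ORC with zlib; the paths are hypothetical assumptions.

```python
# Hedged sketch: paths are hypothetical. Columnar Parquet + Snappy suits
# fast analytical scans; ORC + zlib trades CPU for smaller files.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("formats-sketch").getOrCreate()
df = spark.read.json("/data/raw/clickstream")

df.write.mode("overwrite") \
    .option("compression", "snappy") \
    .parquet("/data/curated/clickstream_parquet")

df.write.mode("overwrite") \
    .option("compression", "zlib") \
    .orc("/data/curated/clickstream_orc")
```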


Required skills:

  • Knowledge of GCP
  • Expertise in Google BigQuery
  • Expertise in Airflow (see the DAG sketch below)
  • Strong hands-on SQL
  • Data warehousing concepts
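
For the Airflow item above, here is a minimal DAG sketch loading daily files from GCS into BigQuery, assuming Airflow 2.x with the Google provider installed; the project, dataset, and bucket names are hypothetical.

```python
# Hedged sketch: bucket, project, and dataset names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="daily_sales_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # `schedule=` in newer Airflow releases
    catchup=False,
) as dag:
    load_sales = GCSToBigQueryOperator(
        task_id="load_sales",
        bucket="example-landing-bucket",
        source_objects=["sales/{{ ds }}/*.csv"],
        destination_project_dataset_table="example_project.analytics.sales",
        source_format="CSV",
        write_disposition="WRITE_APPEND",
    )
```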
Kloud9 Technologies
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹30L / yr
Google Cloud Platform (GCP)
PySpark
Python
Scala

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is constrained by, and spends heavily on, physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to retail industry giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform and technology independent. Our vendor independence not only provides us with a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.


●    Overall 8+ years of experience in web application development.

●    5+ years of development experience with Java 8, Spring Boot, microservices and middleware.

●    3+ years designing middleware on the Node.js platform.

●    Good to have: 2+ years of experience using Node.js with the AWS serverless platform.

●    Good experience with JavaScript/TypeScript, event loops, Express.js, GraphQL, SQL databases (MySQL), NoSQL databases (MongoDB) and YAML templates.

●    Good experience with test-driven development (TDD) and automated unit testing.

●    Good experience exposing and consuming REST APIs on the Java 8 / Spring Boot platform, with Swagger API contracts.

●    Good experience building Node.js middleware performing transformations, routing, aggregation, orchestration and authentication (JWT/OAuth).

●    Experience supporting and working with cross-functional teams in a dynamic environment.

●    Experience working with the Agile Scrum methodology.

●    Very good problem-solving skills.

●    A quick learner with a passion for technology.

●    Excellent verbal and written communication skills in English.

●    Ability to communicate effectively with team members and business stakeholders.


Secondary Skill Requirements:

 

● Experience working with any of LoopBack, NestJS, Hapi.js, Sails.js, Passport.js


Why Explore a Career at Kloud9:

 

With job opportunities in prime locations in the US, London, Poland and Bengaluru, we help build your career path in the cutting-edge technologies of AI, machine learning and data science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

Amagi Media Labs

Posted by Rajesh C
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹14L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+3 more
1. 2 to 4 years of experience
2. Hands-on experience using Python, SQL and Tableau
3. Data Analyst
About Amagi (www.amagi.com): Amagi is a market leader in cloud-based media technology services for channel creation, distribution and ad monetization. Amagi's cloud technology and managed services are used by TV networks, content owners, sports rights owners and pay TV / OTT platforms to create 24x7 linear channels for OTT and broadcast and deliver them to end consumers. Amagi's pioneering and market-leading cloud platform has won numerous accolades and is deployed in over 40 countries by 400+ TV networks. Customers of Amagi include A+E Networks, Comcast, Google, NBCUniversal, Roku, Samsung and Warner Media. This is a unique and transformative opportunity to participate in and grow a world-class technology company that changes the tenets of TV. Amagi is a private-equity-backed firm with investments from KKR (Emerald Media Fund), Premji Invest and Mayfield. Amagi has offices in New York, Los Angeles, London, New Delhi and Bangalore. LinkedIn page: https://www.linkedin.com/company/amagicorporation News: https://www.amagi.com/about/newsroom/amagi-clocks-120-yoy-quarterly-growth-as-channels-on-its-platform-grows-to-400/ Cofounder on YouTube: https://www.youtube.com/watch?v=EZ0nBT3ht0E
 

About Amagi & Growth


Amagi Corporation is a next-generation media technology company that provides cloud broadcast and targeted advertising solutions to broadcast TV and streaming TV platforms. Amagi enables content owners to launch, distribute and monetize live linear channels on free ad-supported streaming TV (FAST) and video services platforms. Amagi also offers 24x7 cloud-managed services, bringing simplicity, advanced automation, and transparency to entire broadcast operations. Overall, Amagi supports 500+ channels on its platform for linear channel creation, distribution, and monetization, with deployments in over 40 countries. Amagi has offices in New York (corporate office), Los Angeles, and London, broadcast operations in New Delhi, and our Development & Innovation center in Bangalore. Amagi is also expanding in Singapore, Canada and other countries.

Amagi has seen phenomenal growth as a global organization over the last 3 years. Amagi has been a profitable firm for the last 2 years, and is now looking at investing in multiple new areas. Amagi has been backed by 4 investors - Emerald, Premji Invest, Nadathur and Mayfield. As of the fiscal year ending March 31, 2021, the company witnessed stellar growth in the areas of channel creation, distribution, and monetization, enabling customers to extend distribution and earn advertising dollars while saving up to 40% in cost of operations compared to traditional delivery models. Some key highlights of this include:

·   Annual revenue growth of 136%
·   44% increase in customers
·   50+ Free Ad Supported Streaming TV (FAST) platform partnerships and 100+ platform partnerships globally
·   250+ channels added to its cloud platform taking the overall tally to more than 500
·   Approximately 2 billion ad opportunities every month supporting OTT ad-insertion for 1000+ channels
·   60% increase in workforce in the US, UK, and India to support strong customer growth (current headcount being 360 full-time employees + Contractors)
·   5-10x growth in ad impressions among top customers
 
Over the last 4 years, Amagi has grown more than 400%. Amagi now has an aggressive growth plan over the next 3 years - to grow 10X in terms of Revenue. In terms of headcount, Amagi is looking to grow to more than 600 employees over the next 1 year. Amagi is building several key organizational processes to support the high growth journey and has gone digital in a big way.
 
MNC

Agency job via Fragma Data Systems by Priyanka U
Remote, Bengaluru (Bangalore)
2 - 6 yrs
₹6L - ₹15L / yr
Spark
Apache Kafka
PySpark
Internet of Things (IOT)
Real time media streaming

JD for IoT DE (Data Engineer):

 

The role requires experience in Azure core technologies – IoT Hub / Event Hub, Stream Analytics, IoT Central, Azure Data Lake Storage, Azure Cosmos DB, Azure Data Factory, Azure SQL Database, Azure HDInsight / Databricks, and SQL Data Warehouse.

 

You Have:

  • Minimum 2 years of software development experience
  • Minimum 2 years of experience in IoT/streaming data pipeline solution development
  • Bachelor's and/or Master's degree in Computer Science
  • Strong consulting skills in data management, including data governance, data quality, security, data integration, processing and provisioning
  • Delivered data management projects with real-time/near-real-time data insights delivery on the Azure cloud
  • Translated complex analytical requirements into technical designs, including data models, ETLs and dashboards/reports
  • Experience deploying dashboards and self-service analytics solutions on both relational and non-relational databases
  • Experience with different computing paradigms in databases, such as in-memory, distributed and massively parallel processing
  • Successfully delivered large-scale IoT data management initiatives covering the plan, design, build and deploy phases, leveraging different delivery methodologies including Agile
  • Experience in handling telemetry data with Spark Streaming, Kafka, Flink, Scala, PySpark and Spark SQL
  • Hands-on experience with containers and Docker
  • Exposure to streaming protocols like MQTT and AMQP (see the MQTT sketch after this list)
  • Knowledge of OT network protocols like OPC UA, CAN bus and similar protocols
  • Strong knowledge of continuous integration, static code analysis and test-driven development
  • Experience delivering projects in a highly collaborative delivery model with teams onsite and offshore
  • Must have excellent analytical and problem-solving skills
  • Delivered change management initiatives focused on driving data platform adoption across the enterprise
  • Strong verbal and written communication skills are a must, as well as the ability to work effectively across internal and external organizations
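
To illustrate the MQTT exposure mentioned above, here is a minimal telemetry consumer sketch using the paho-mqtt 1.x client API; the broker host and topic are hypothetical assumptions.

```python
# Hedged sketch: broker host and topic are hypothetical; callback
# signatures follow the paho-mqtt 1.x API.
import json

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("connected, result code:", rc)
    client.subscribe("factory/+/telemetry")

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # Hand the reading off to the streaming pipeline (e.g. Kafka) here.
    print(msg.topic, reading)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883, keepalive=60)
client.loop_forever()
```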
     

Roles & Responsibilities
 

You Will:

  • Translate functional requirements into technical designs
  • Interact with clients and internal stakeholders to understand the data and platform requirements in detail and determine the core Azure services needed to fulfill the technical design
  • Design, develop and deliver data integration interfaces in ADF and Azure Databricks
  • Design, develop and deliver data provisioning interfaces to fulfill consumption needs
  • Deliver data models on the Azure platform, whether on Azure Cosmos DB, SQL DW / Synapse, or SQL
  • Advise clients on ML engineering and deploying MLOps at scale on AKS
  • Automate core activities to minimize delivery lead times and improve overall quality
  • Optimize platform cost by selecting the right platform services and architecting the solution in a cost-effective manner
  • Deploy Azure DevOps and CI/CD processes
  • Deploy logging and monitoring across the different integration points for critical alerts

 

Busigence Technologies

Posted by Seema Verma
Bengaluru (Bangalore)
0 - 10 yrs
₹3L - ₹9L / yr
Data Science
Big Data
Machine Learning (ML)
Statistical Analysis
Deep Learning
+3 more
APPLY LINK: http://bit.ly/2yipqSE

Go through the entire job post thoroughly before pressing Apply. There is an eleven-character French word v*n*i*r*t*e mentioned somewhere in the whole text which is irrelevant to the context. You shall be required to enter this word while applying, else the application won't be considered submitted.

Aspirant - Data Science & AI
Team: Sciences
Full-Time, Trainee
Bengaluru, India
Relevant Exp: 0 - 10 Years
Background: Top-tier institute
Compensation: Above standards

Busigence is a Decision Intelligence Company. We create decision intelligence products for real people by combining data, technology, business, and behavior, enabling strengthened decisions. We are a scaling, established startup founded by IIT alumni, innovating in and disrupting the marketing domain through artificial intelligence. We bring onboard those people who are dedicated to delivering wisdom to humanity by solving the world's most pressing problems differently, thereby significantly impacting thousands of souls, every day.

We are a deep-rooted organization with six years of success stories, having worked with folks from top-tier backgrounds (IIT, NSIT, DCE, BITS, IIITs, NITs, IIMs, ISI, etc.), maintaining an awesome culture with a common vision to build great data products. In the past we have served fifty-five customers and are presently developing our second product, Robonate. The first was emmoQ - an emotion intelligence platform. Our third offering, H2HData, is an innovation lab where we solve hard problems through data, science, & design.

We work extensively & intensely on big data, data science, machine learning, deep learning, reinforcement learning, data analytics, natural language processing, cognitive computing, and business intelligence.

First-and-Foremost
Before you dive in exploring this opportunity and press Apply, we wish you to evaluate yourself - we are looking for the right candidate, not the best candidate. We love to work with someone who can mandatorily gel with our vision, beliefs, thoughts, methods, and values --- which are aligned with what can be expected in a true startup with ambitious goals. Skills are always secondary to us. Primarily, you must be someone who is not essentially looking for a job or career, but rather starving for a challenge, though you yourself probably don't know since when. A book could be written on what an applicant must have before joining a startup. For brevity, in a nutshell, we need these three in you:

1. You must be [super sharp] (Just an analogue, but Irodov, Mensa, Feynman, Polya, ACM, NIPS, ICAAC, BattleCode, DOTA etc. should have been your done stuff. Can you relate solution 1 to problem 2? Or do you get confused even when you have solved a similar problem in the past? Are you able to grasp a problem statement in one go, or do you get hung up?)
2. You must be [extremely energetic] (Do you raise eyebrows when asked to stretch your limits, in terms of either complexity or extra hours to put in? What comes first in your mind: let's finish it today, or this can be done tomorrow too? It's Friday 10 PM at work - tired?)
3. You must be [honourably honest] (Do you tell others what you think, or what they want to hear? The latter is good for a sales team and its customers, not for this role. Are you honest with your work? Intrinsically, with yourself first?)

You know yourself the best. If not, ask your loved ones and then decide. We clearly need exceedingly motivated people with entrepreneurial traits, not an employee mindset - not at all.

This is an immediate requirement. We shall have an accelerated interview process for fast closure - you would be required to be proactive and responsive.

Real ROLE
We are looking for students, graduates, and experienced folks with real passion for algorithms, computing, and analysis. You would be required to work with our sciences team on complex cases from data science, machine learning, and business analytics.

Mandatory
R1. Must know functional programming (https://docs.python.org/2/howto/functional.html) in Python in-and-out, with a strong flair for data structures, linear algebra, & algorithm implementation. OOP alone will not be accepted.
R2. Must have gotten your hands dirty with methods, functions, and workarounds in NumPy, Pandas, scikit-learn, SciPy, and statsmodels - collectively you should have implemented at least 100 different techniques (we averaged out this figure with our past aspirants who have worked in this role).
R3. Must have implemented complex mathematical logic through a functional map-reduce framework in Python (a minimal sketch of this style follows below).
R4. Must have an understanding of the EDA cycle, machine learning algorithms, hyper-parameter optimization, ensemble learning, regularization, prediction, clustering, and associations - at an essential level.
R5. Must have solved at least five problems through data science & machine learning. Mere Coursera learning and/or Kaggle offline attempts shall not be accepted.

Preferred
R6. Good to have the required calibre to learn PySpark within four weeks once you have joined us.
R7. Good to have the required calibre to grasp the underlying business of a problem to be solved.
R8. Good to have an understanding of CNNs, RNNs, MLPs, and auto-encoders - at a basic level.
R9. Good to have solved at least three problems through deep learning. Mere Coursera learning and/or Kaggle offline attempts shall not be accepted.
R10. Good to have worked on pre-processing techniques for images, audio, and text - OpenCV, Librosa, NLTK.
R11. Good to have used pre-trained models - VGGNet, Inception, ResNet, WaveNet, Word2Vec.

Ideal YOU
Y1. Degree in engineering, or any other data-heavy field, at Bachelors level or above from a top-tier institute.
Y2. Relevant experience of 0 - 10 years working on real-world problems in a reputed company or a proven startup.
Y3. You are a fanatical implementer who loves to spend time with content, code, & workarounds, more than your loved ones.
Y4. You are a true believer that human intelligence can be augmented through computer science & mathematics, and your survival vinaigrette depends on getting the most from the data.
Y5. You have an entrepreneurial mindset with ownership, intellectuality, & creativity as your way to work. These are not fancy words; we mean it.

Actual WE
W1. Real startup with meaningful products.
W2. Revolutionary, not just disruptive.
W3. Rule creators, not followers.
W4. Small teams with real brains, not a herd of blockheads.
W5. Completely trust us and be trusted back.

Why Us
In addition to the regular stuff which every good startup offers - lots of learning, food, parties, an open culture, flexible working hours, and what not - we offer you:

You shall be working on our revolutionary products, which are pioneers in their respective categories. This is a fact. We try real hard to hire fun-loving, crazy folks who are driven by more than a paycheck. You shall be working with the creamiest talent on extremely challenging problems at a most happening workplace.
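
As a minimal sketch of the functional map-reduce style requirement R3 refers to, here is a word count expressed with map/reduce rather than loops; the sample documents are illustrative.

```python
# Hedged sketch: documents are illustrative. Word count via map/reduce.
from collections import Counter
from functools import reduce

documents = [
    "data science and ai",
    "ai for decision intelligence",
    "science of data",
]

# Map each document to a Counter of its tokens, then reduce by merging.
counts = reduce(lambda acc, c: acc + c,
                map(lambda doc: Counter(doc.split()), documents),
                Counter())
print(counts.most_common(3))  # [('data', 2), ('science', 2), ('ai', 2)]
```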
How to Apply
You should apply online by clicking "Apply Now". For queries regarding an open position, please write to [email protected]. For more information, visit http://www.busigence.com

Careers: http://careers.busigence.com
Research: http://research.busigence.com
Jobs: http://careers.busigence.com/jobs/data-science

If you feel you are the right fit for the position, mandatorily attach a PDF resume highlighting your
A. Key Skills
B. Knowledge Inputs
C. Major Accomplishments
D. Problems Solved
E. Submissions - GitHub / Stack Overflow / Kaggle / Project Euler etc. (if applicable)

If you don't see an open position that interests you, join our Talent Pool and let us know how you can make a difference here. Referrals are more than welcome. Keep us in the loop.
1CH

Posted by Sathish Sukumar
Chennai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida), Mumbai, Pune
4 - 15 yrs
₹10L - ₹25L / yr
Data engineering
Data engineer
ETL
SSIS
ADF
+3 more
  • Expertise in designing and implementing enterprise scale database (OLTP) and Data warehouse solutions.
  • Hands-on experience in implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics) and big data processing using Azure Databricks and Azure HDInsight.
  • Expert in writing T-SQL programming for complex stored procedures, functions, views and query optimization.
  • Should be aware of Database development for both on-premise and SAAS Applications using SQL Server and PostgreSQL.
  • Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
  • Experience and expertise in building machine learning models using logistic and linear regression, decision tree and random forest algorithms (see the sketch after this list).
  • PolyBase queries for exporting and importing data into Azure Data Lake.
  • Building data models both tabular and multidimensional using SQL Server data tools.
  • Writing data preparation, cleaning and processing steps using Python, SCALA, and R.
  • Programming experience using python libraries NumPy, Pandas and Matplotlib.
  • Implementing NoSQL databases and writing queries using Cypher.
  • Designing end user visualizations using Power BI, QlikView and Tableau.
  • Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
  • Experience using the expression languages MDX and DAX.
  • Experience in migrating on-premise SQL server database to Microsoft Azure.
  • Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
  • Performance tuning complex SQL queries, hands on experience using SQL Extended events.
  • Data modeling using Power BI for Adhoc reporting.
  • Raw data load automation using T-SQL and SSIS
  • Expert in migrating existing on-premise database to SQL Azure.
  • Experience in using U-SQL for Azure Data Lake Analytics.
  • Hands on experience in generating SSRS reports using MDX.
  • Experience in designing predictive models using Python and SQL Server.
  • Developing machine learning models using Azure Databricks and SQL Server
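
To illustrate the model families named above (logistic regression and random forest), here is a minimal scikit-learn sketch on a synthetic dataset; nothing here is tied to any specific client data.

```python
# Hedged sketch: synthetic data only, illustrating the two model families.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(type(model).__name__, round(acc, 3))
```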
Artivatic

Posted by Layak Singh
Bengaluru (Bangalore)
3 - 7 yrs
₹6L - ₹14L / yr
Python
Machine Learning (ML)
Artificial Intelligence (AI)
Deep Learning
Natural Language Processing (NLP)
+3 more
About Artivatic: Artivatic is a technology startup that uses AI/ML/deep learning to build intelligent products & solutions for finance, healthcare & insurance businesses. It is based out of Bangalore with a 25+ member team focused on technology. Artivatic is building cutting-edge solutions to enable 750 million+ people to get insurance, financial access, and health benefits with alternative data sources, increasing their productivity, efficiency, automation power, and profitability, and hence improving their way of doing business more intelligently & seamlessly.

Artivatic offers lending underwriting, credit/insurance underwriting, fraud prediction, personalization, recommendation, risk profiling, consumer profiling intelligence, KYC automation & compliance, healthcare, automated decisions, monitoring, claims processing, sentiment/psychology behaviour, auto insurance claims, travel insurance, disease prediction for insurance, and more.

Job description
We at Artivatic are seeking a passionate, talented and research-focused natural language processing engineer with a strong machine learning and mathematics background to help build industry-leading technology. The ideal candidate will have research/implementation experience in modeling and developing NLP tools, and experience working with machine learning/deep learning algorithms.

Roles and responsibilities
• Developing novel algorithms and modeling techniques to advance the state of the art in natural language processing.
• Developing NLP-based tools and solutions end to end.
• Working closely with R&D and machine learning engineers, implementing algorithms that power user- and developer-facing products.
• Being responsible for measuring and optimizing the quality of your algorithms.

Requirements
• Hands-on experience building NLP models using different NLP libraries and toolkits like NLTK, Stanford NLP, etc. (see the NLTK sketch below).
• Good understanding of rule-based, statistical and probabilistic NLP techniques.
• Good knowledge of NLP approaches and concepts like topic modeling, text summarization, semantic modeling, named entity recognition, etc.
• Good understanding of machine learning and deep learning algorithms.
• Good knowledge of data structures and algorithms.
• Strong programming skills in Python/Java/Scala/C/C++.
• Strong problem-solving and logical skills.
• A go-getter kind of attitude with the willingness to learn new technologies.
• Well versed in software design paradigms and good development practices.

Basic Qualifications
• Bachelors or Masters degree in Computer Science, Mathematics or a related field, with specialization in natural language processing, machine learning or deep learning.
• A publication record in conferences/journals is a plus.
• 2+ years of working/research experience building NLP-based solutions is preferred.

If you feel that you are the ideal candidate & can bring a lot of value to our culture & company's vision, then please do apply. If your profile matches our requirements, you will hear from one of our team members. We are looking for someone who can be part of our team, not just an employee.

Job Perks: Insurance, travel compensation & others.
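
To illustrate the NLTK requirement above, here is a minimal sketch of tokenization, POS tagging, and named entity recognition; resource names follow classic NLTK 3.x and the sample sentence is illustrative.

```python
# Hedged sketch: resource names per classic NLTK 3.x; newer releases may
# use different package names (e.g. punkt_tab).
import nltk

for pkg in ("punkt", "averaged_perceptron_tagger",
            "maxent_ne_chunker", "words"):
    nltk.download(pkg, quiet=True)  # one-time model/corpus downloads

text = "Artivatic builds AI products for insurance businesses in Bangalore."
tokens = nltk.word_tokenize(text)
tagged = nltk.pos_tag(tokens)
print(nltk.ne_chunk(tagged))  # tree with chunks such as (GPE Bangalore)
```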