
ui

Posted by bodokim jdss
5 - 6 yrs
$0.0K - $0.1K / yr
Remote only
Skills
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
Spotfire
User Interface (UI) Development


About bodokimcom

Founded: 2023
Type:
Size: 20-100
Stage: Raised funding
About: N/A
Company social profiles: N/A

Similar jobs

Construction Tech Start-up
Agency job
via Merito by Jinita Sumaria
Kolkata
4 - 5 yrs
₹8L - ₹10L / yr
IT infrastructure
Business Development
Business Analysis
Business Intelligence (BI)
A/B Testing

About Company
Our client is a well-funded construction tech start-up backed by a renowned group.


Responsibilities
- Gather intelligence from key business leaders about needs and future growth
- Partner with the internal IT team to ensure each project meets a specific need and is delivered successfully
- Assume responsibility for project tasks and ensure they are completed in a timely fashion
- Evaluate, test, and recommend new opportunities for enhancing our software, hardware, and IT processes
- Compile and distribute reports on application development and deployment
- Design and execute A/B testing procedures to extract data from test runs (a minimal analysis sketch follows this list)
- Evaluate and draw conclusions from data related to customer behavior
- Consult with the executive team and the IT department on the newest technologies and their implications for the industry
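
For context, a minimal sketch of the kind of A/B test evaluation this role describes, assuming a simple two-variant conversion experiment analysed in Python with statsmodels; the conversion counts, visitor numbers, and 5% threshold are illustrative assumptions, not figures from the client.

```python
# Illustrative two-variant conversion test; all numbers are placeholders.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 468]      # [control, treatment] conversions (assumed)
visitors = [10_000, 10_050]   # [control, treatment] visitors (assumed)

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

print(f"control rate:   {conversions[0] / visitors[0]:.3%}")
print(f"treatment rate: {conversions[1] / visitors[1]:.3%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# A common (but project-specific) decision rule: accept the variant only if
# the p-value clears a pre-agreed significance threshold.
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```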

Requirements:
- Bachelor's Degree in Software Development, Computer Engineering, Project Management, or a related field
- 3+ years of experience in technology development and deployment

Mumbai, Navi Mumbai
6 - 14 yrs
₹16L - ₹37L / yr
Python
PySpark
Data engineering
Big Data
Hadoop
+3 more

Role: Principal Software Engineer


We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. This role involves managing, processing, and analyzing large amounts of raw information in scalable databases. It also involves developing unique data structures and writing algorithms for an entirely new set of products. The candidate must have strong critical-thinking and problem-solving skills, be experienced in software development with advanced algorithms, and be able to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus. The candidate should also have some exposure to cloud environments, continuous integration, and Agile/Scrum processes.



Responsibilities:


• Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule

• Software development that creates data-driven intelligence in products dealing with Big Data backends

• Exploratory analysis of the data to come up with efficient data structures and algorithms for the given requirements

• The system may or may not involve machine learning models and pipelines but will require advanced algorithm development

• Managing data in large-scale data stores (such as NoSQL DBs, time-series DBs, geospatial DBs, etc.)

• Creating metrics and evaluating algorithms for better accuracy and recall

• Ensuring efficient access and usage of data by means of indexing, clustering, etc. (a minimal indexing sketch follows this list)

• Collaborate with engineering and product development teams.
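
As an illustration of the indexing point above, here is a minimal, hypothetical sketch using pymongo against a MongoDB event store; the database, collection, and field names are assumptions, not the product's actual schema.

```python
# Illustrative only: ensure indexes exist so common queries avoid collection scans.
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")   # assumed local instance
events = client["analytics"]["device_events"]       # hypothetical collection

# Compound index for the typical "latest events for a device" query.
events.create_index([("device_id", ASCENDING), ("timestamp", DESCENDING)])

# Use explain() to confirm the query planner actually uses the index.
plan = events.find({"device_id": "dev-42"}).sort("timestamp", -1).explain()
print(plan["queryPlanner"]["winningPlan"])
```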


Requirements:


• Master’s or Bachelor’s degree in Engineering in one of these domains - Computer Science, Information Technology, Information Systems, or a related field from a top-tier school

• OR a Master’s degree or higher in Statistics or Mathematics, with a hands-on background in software development

• 8 to 10 years of experience in product development, having done algorithmic work

• 5+ years of experience working with large data sets or doing large-scale quantitative analysis

• Understanding of SaaS-based products and services

• Strong algorithmic problem-solving skills

• Able to mentor and manage a team and take responsibility for team deadlines


Skill set required:


• In-depth knowledge of the Python programming language

• Understanding of software architecture and software design

• Must have fully managed a project with a team

• Experience working with Agile project management practices

• Experience with data processing, analytics, and visualization tools in Python (such as pandas, matplotlib, SciPy, etc.); a minimal example follows this list

• Strong understanding of SQL and of querying NoSQL databases (e.g. MongoDB, Cassandra, Redis)
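
For reference, a minimal sketch of the pandas/matplotlib workflow named in the skill list; the input file and column names are placeholders.

```python
# Illustrative only: load a metrics file, aggregate weekly, and plot a trend.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("daily_metrics.csv", parse_dates=["date"])  # hypothetical file
weekly = (
    df.set_index("date")
      .resample("W")["requests"]          # placeholder column name
      .sum()
)

weekly.plot(title="Weekly request volume")
plt.xlabel("week")
plt.ylabel("requests")
plt.tight_layout()
plt.savefig("weekly_requests.png")
```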

Series 'A' funded Silicon Valley based BI startup
Bengaluru (Bangalore)
4 - 6 yrs
₹30L - ₹45L / yr
Data engineering
Data Engineer
Scala
Data Warehouse (DWH)
Big Data
+7 more
It is the leader in capturing technographics-powered buying intent, helping companies uncover the 3% of active buyers in their target market. It evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market and sales intelligence. Its customers have access to the buying patterns and contact information of more than 17 million companies and 70 million decision makers across the world.

Role – Data Engineer

Responsibilities

- Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for the Data Lake/Data Warehouse.
- Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
- Assemble large, complex data sets from third-party vendors to meet business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology (a minimal ETL sketch follows this list).
- Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.
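
A minimal sketch of the extract-transform-load flow described above, assuming pymongo for extraction, pandas for the transform step, and boto3 for loading Parquet to S3 (pyarrow is needed for Parquet); the bucket, database, and field names are placeholders, not the company's actual pipeline.

```python
# Illustrative ETL sketch: MongoDB -> pandas -> S3 (all names are placeholders).
import io

import boto3
import pandas as pd
from pymongo import MongoClient

# Extract: pull raw vendor records from a MongoDB collection.
client = MongoClient("mongodb://localhost:27017")
raw = list(client["vendor_data"]["accounts"].find({}, {"_id": 0}))

# Transform: normalize column names and drop incomplete rows.
df = pd.DataFrame(raw)
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df = df.dropna(subset=["company_domain"])          # placeholder required field

# Load: write a Parquet file to S3 for downstream warehouse ingestion.
buffer = io.BytesIO()
df.to_parquet(buffer, index=False)                 # requires pyarrow
boto3.client("s3").put_object(
    Bucket="example-data-lake",                    # placeholder bucket
    Key="accounts/accounts.parquet",
    Body=buffer.getvalue(),
)
```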

Requirements
- 5+ years of experience in a Data Engineer role.
- Proficiency in Linux.
- Must have strong SQL knowledge and experience with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
- Must have experience with Python/Scala.
- Must have experience with Big Data technologies like Apache Spark.
- Must have experience with Apache Airflow (a minimal DAG sketch follows this list).
- Experience with data pipeline and ETL tools like AWS Glue.
- Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
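
Since Apache Airflow is called out explicitly, here is a minimal, hypothetical DAG sketch in the Airflow 2.x style; the task logic, IDs, and schedule are placeholders.

```python
# Minimal Airflow 2.x DAG sketch; the callables and schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from the source system")


def load():
    print("write transformed data to the warehouse")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Declare ordering: load runs only after extract succeeds.
    extract_task >> load_task
```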
Cubera Tech India Pvt Ltd
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
Java
Python
Hibernate (Java)
+10 more

Data Engineer- Senior

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

What are you going to do?

Design and develop high-performance, scalable solutions that meet the needs of our customers.

Work closely with Product Management, Architects, and cross-functional teams.

Build and deploy large-scale systems in Java/Python.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Create data tools for analytics and data science team members that assist them in building and optimizing their algorithms.

Follow best practices that can be adopted in the Big Data stack.

Use your engineering experience and technical skills to drive the features and mentor the engineers.

What are we looking for ( Competencies) :

Bachelor’s degree in computer science, computer engineering, or related technical discipline.

Overall 5 to 8 years of programming experience in Java and Python, including object-oriented design.

Data handling frameworks: Should have a working knowledge of one or more data handling frameworks such as Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.

Data Infrastructure: Should have experience in building, deploying, and maintaining applications on popular cloud infrastructure such as AWS, GCP, etc.

Data Store: Must have expertise in one of the general-purpose NoSQL data stores such as Elasticsearch, MongoDB, Redis, Redshift, etc. (a minimal example follows this list).

Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.

Ability to work with distributed teams in a collaborative and productive manner.
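
As a purely illustrative example of working with one of those NoSQL stores, the sketch below indexes and queries a document with the official Elasticsearch Python client (8.x API assumed); the index name and document fields are invented for the example.

```python
# Illustrative only: index and query a document with the Elasticsearch 8.x client.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")          # assumed local cluster

# Index a single ad-event document (fields are placeholders); refresh so it is
# immediately searchable for the demo query below.
es.index(
    index="ad_events",
    document={"campaign": "spring_launch", "clicks": 17, "country": "IN"},
    refresh=True,
)

# Query it back with a simple match query.
resp = es.search(index="ad_events", query={"match": {"campaign": "spring_launch"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"])
```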

Benefits:

Competitive Salary Packages and benefits.

A collaborative, lively, and upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹15L / yr
Statistical Analysis
PowerBI
Data Analytics
AzureML
Data Science

In this role, we are looking for:

  • A problem-solving mindset with the ability to understand business challenges and how to apply your analytics expertise to solve them.
  • The unique person who can present complex mathematical solutions in a simple manner that most will understand, using data visualization techniques to tell a story with data.
  • An individual excited by innovation and new technology and eager to find ways to employ these innovations in practice.
  • A team mentality, empowered by the ability to work with a diverse set of individuals.
  • A passion for data, with a particular emphasis on data visualization.

 

Basic Qualifications

 

  • A Bachelor’s degree in Data Science, Math, Statistics, Computer Science or related field with an emphasis on data analytics.
  • 5+ years of professional experience, preferably in a data analyst / data scientist role or similar, with proven results.
  • 3+ years of professional experience in a leadership role guiding high-performing, data-focused teams, with a track record of building and developing talent.
  • Proficiency in your statistics / analytics / visualization tool of choice, but preferably in the Microsoft Azure Suite, including PowerBI and/or AzureML.
Remote only
3 - 6 yrs
₹12L - ₹23L / yr
Deep Learning
Computer Vision
PyTorch
TensorFlow
Python
+7 more
This person MUST have:
- B.E. in Computer Science or equivalent.
- In-depth knowledge of machine learning algorithms and their applications, including practical experience with and a theoretical understanding of algorithms for classification, regression, and clustering.
- Hands-on experience in computer vision and deep learning projects solving real-world problems involving vision tasks such as object detection, object tracking, instance segmentation, activity detection, depth estimation, optical flow, multi-view geometry, domain adaptation, etc.
- Strong understanding of modern and traditional computer vision algorithms.
- Experience with one of the deep learning frameworks/networks: PyTorch, TensorFlow, Darknet (YOLO v4/v5), U-Net, Mask R-CNN, EfficientDet, BERT, etc. (a minimal PyTorch example follows this list).
- Proficiency with CNN architectures such as ResNet, VGG, U-Net, MobileNet, pix2pix, and CycleGAN.
- Experienced user of libraries such as OpenCV, scikit-learn, matplotlib, and pandas.
- Ability to transform research articles into working solutions that solve real-world problems.
- High proficiency in Python programming.
- Familiarity with software development practices/pipelines (DevOps: Kubernetes, Docker containers, CI/CD tools).
- Strong communication skills.
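
A minimal, hypothetical sketch of object-detection inference with PyTorch/torchvision, one of the frameworks listed above; the image path, confidence threshold, and model choice are assumptions for illustration only.

```python
# Illustrative only: run a pretrained torchvision detector on one image.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

img = read_image("street_scene.jpg")                  # placeholder image path
batch = [weights.transforms()(img)]                   # preprocess as the model expects

with torch.no_grad():
    (pred,) = model(batch)                            # one dict per input image

labels = weights.meta["categories"]
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.8:                                   # arbitrary confidence threshold
        print(labels[int(label)], [round(v) for v in box.tolist()], f"{float(score):.2f}")
```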
Bengaluru (Bangalore)
7 - 8 yrs
₹15L - ₹16L / yr
IBM DB2 DBA
IBM DB2
Solution Manager Diagnostics
IBM DB2 V11.5
Database Restores
+5 more
  • Expert knowledge of IBM DB2 V11.5 installations, configurations & administration in Linux systems.
  • Expert level knowledge in Database restores including redirected restore & backup concepts.
  • Excellent understanding of database performance monitoring techniques, fine-tuning, and able to perform performance checks & query optimization
  • Good knowledge of utilities like import, load & export under high volume conditions.
  • Ability to tune SQLs using db2advisor & db2explain.
  • Ability to troubleshoot database issues using db2diag, db2pd, db2dart, db2top, etc.
  • Administration of database objects.
  • Capability to review & assess features or upgrades to existing components.
  • Experience in validating security aspects on a confidential database.
  • Hands-on experience in SSL communication setup, strong access control, and database hardening.
  • Experience in performing productive DB recovery and validating crash recovery.
  • Experience in handling incidents & opening DB2 support tickets.
  • Experience in deploying a special build DB2 version from DB2 support.
  • Worked in environments with 12x5 support of production database services.
  • Excellent problem-solving skills, analytical skills.

Good to have:

  • Experience in handling application servers (WebSphere, WebLogic, Jboss, etc.) in highly available Production environments.
  • Experience in maintenance, patching, and installing updates on WebSphere Servers in the Production environment.
  • Able to handle installation/deployment of the product (JAR/EAR/WAR) independently.
  • Knowledge of ITIL concepts (Service operation & transition)

Soft skills:

  • Ability to work with the global team (co-located staffing).
  • Carries a learning attitude, should be an individual contributor, and must have excellent communication skills.
  • Support: 12/5 support is required (on a rotational basis).

Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹14L / yr
Data Engineer
Big Data
Python
Amazon Web Services (AWS)
SQL
+2 more
  • We are looking for a Data Engineer with 3-5 years of experience in Python, SQL, AWS (EC2, S3, Elastic Beanstalk, API Gateway), and Java.
  • The applicant must be able to perform data mapping (data type conversion, schema harmonization) using Python, SQL, and Java (a minimal sketch follows this list).
  • The applicant must be familiar with and have programmed ETL interfaces (OAuth, REST API, ODBC) using the same languages.
  • The company is looking for someone who shows an eagerness to learn and who asks concise questions when communicating with teammates.
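
A minimal, hypothetical sketch of the data mapping described above (type conversion plus schema harmonization) using pandas; the source columns, target schema, and sample rows are invented for the example.

```python
# Illustrative only: harmonize a vendor extract onto a target schema.
import pandas as pd

# Hypothetical target schema: column name -> pandas dtype.
TARGET_SCHEMA = {"customer_id": "int64", "signup_date": "datetime64[ns]", "revenue": "float64"}

# Mapping from the vendor's column names to the target names (assumed).
COLUMN_MAP = {"CustID": "customer_id", "SignupDt": "signup_date", "Rev_USD": "revenue"}

raw = pd.DataFrame(
    {"CustID": ["101", "102"], "SignupDt": ["2023-01-05", "2023-02-11"], "Rev_USD": ["10.5", "7"]}
)

mapped = raw.rename(columns=COLUMN_MAP)
mapped["signup_date"] = pd.to_datetime(mapped["signup_date"])   # type conversion
mapped = mapped.astype({"customer_id": "int64", "revenue": "float64"})

assert list(mapped.columns) == list(TARGET_SCHEMA)               # schema harmonized
print(mapped.dtypes)
```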
PAGO Analytics India Pvt Ltd
Vijay Cheripally
Posted by Vijay Cheripally
Remote, Bengaluru (Bangalore), Mumbai, NCR (Delhi | Gurgaon | Noida)
2 - 8 yrs
₹8L - ₹15L / yr
Python
PySpark
Microsoft Windows Azure
SQL Azure
Data Analytics
+6 more
Be an integral part of large scale client business development and delivery engagements
Develop the software and systems needed for end-to-end execution on large projects
Work across all phases of SDLC, and use Software Engineering principles to build scaled solutions
Build the knowledge base required to deliver increasingly complex technology projects


Object-oriented languages (e.g. Python, PySpark, Java, C#, C++ ) and frameworks (e.g. J2EE or .NET)
Database programming using any flavours of SQL
Expertise in relational and dimensional modelling, including big data technologies
Exposure across the entire SDLC process, including testing and deployment
Expertise in Microsoft Azure is mandatory, including components like Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, ML Service, etc.
Good knowledge of Python and Spark is required (a minimal PySpark sketch follows this list)
Good understanding of how to enable analytics using cloud technology and MLOps
Experience with Azure infrastructure and Azure DevOps will be a strong plus
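
A minimal, hypothetical PySpark sketch in the spirit of the Azure Databricks work listed above; the storage account, container, and column names are placeholders, and the abfss:// path layout is only an assumption about the environment (the required Azure connectors are presumed to be configured).

```python
# Illustrative only: read Parquet from ADLS Gen2, aggregate, and write results back.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_azure_aggregation").getOrCreate()

# Placeholder ADLS Gen2 paths (abfss://<container>@<account>.dfs.core.windows.net/...).
source = "abfss://raw@exampleaccount.dfs.core.windows.net/events/"
target = "abfss://curated@exampleaccount.dfs.core.windows.net/daily_event_counts/"

events = spark.read.parquet(source)

daily_counts = (
    events.withColumn("event_date", F.to_date("event_timestamp"))   # placeholder column
          .groupBy("event_date", "event_type")
          .count()
)

daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(target)
```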
DemandMatrix
at DemandMatrix
4 recruiters
Harwinder Singh
Posted by Harwinder Singh
Remote only
9 - 12 yrs
₹25L - ₹30L / yr
Big Data
PySpark
Apache Hadoop
Spark
Python
+3 more

Only a solid grounding in computer engineering, Unix, data structures and algorithms would enable you to meet this challenge.

7+ years of experience architecting, developing, releasing, and maintaining large-scale big data platforms on AWS or GCP

Understanding of how Big Data tech and NoSQL stores like MongoDB, HBase/HDFS, and Elasticsearch synergize to power applications in analytics, AI, and knowledge graphs

Understanding of how data processing models, data location patterns, disk IO, network IO, and shuffling affect large-scale text processing - feature extraction, searching, etc.

Expertise with a variety of data processing systems, including streaming, event, and batch (Spark,  Hadoop/MapReduce)

5+ years proficiency in configuring and deploying applications on Linux-based systems

5+ years of experience with Spark - especially PySpark for transforming large non-structured text data and creating highly optimized pipelines (a minimal sketch follows at the end of this listing)

Experience with RDBMS, ETL techniques and frameworks (Sqoop, Flume) and big data querying tools (Pig, Hive)

A stickler for world-class best practices, uncompromising on the quality of engineering; understands standards and reference architectures and is deep in the Unix philosophy, with an appreciation of big data design patterns, orthogonal code design, and functional computation models
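
To close this listing, a minimal, hypothetical PySpark sketch of the kind of large-scale text feature extraction mentioned above; the input path, the assumed url and body fields, and the pipeline stages are illustrative only.

```python
# Illustrative only: tokenize raw text and hash it into sparse feature vectors.
from pyspark.ml.feature import HashingTF, RegexTokenizer
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example_text_features").getOrCreate()

# Placeholder input: one JSON record per line with assumed "url" and "body" fields.
docs = spark.read.json("s3a://example-bucket/raw_pages/")

tokenizer = RegexTokenizer(inputCol="body", outputCol="tokens", pattern="\\W+")
hasher = HashingTF(inputCol="tokens", outputCol="features", numFeatures=1 << 18)

features = hasher.transform(tokenizer.transform(docs))
features.select("url", "features").write.mode("overwrite").parquet(
    "s3a://example-bucket/page_features/"
)
```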