Ab Initio, Big Data, Informatica, Tableau, Data Architect, Cognos, MicroStrategy, Healthcare Business Analysts, Cloud, etc.

Posted by Dhaval Upadhyay
1 - 15 yrs
₹5L - ₹10L / yr
Pune, Chicago, Hyderabad, New York
Skills
Ab Initio
Cognos
MicroStrategy
Business Analysts
Hadoop
Informatica PowerCenter
Tableau
Exusia, Inc. (ex-OO-see-ah: from the Greek for "immensely powerful and agile") was founded to address a growing gap in the data innovation and engineering space and to become the next global leader in big data, analytics, data integration, and cloud computing solutions. Exusia is a multinational, delivery-centric firm that provides consulting and software-as-a-service (SaaS) solutions to leading financial, government, healthcare, telecommunications, and high-technology organizations facing the largest data volumes and the most complex information management requirements.

Exusia was founded in the United States in 2012, with headquarters in New York City and regional US offices in Chicago, Atlanta, and Los Angeles. Exusia's international presence continues to expand and is driven from Toronto (Canada), Sao Paulo (Brazil), Johannesburg (South Africa), and Pune (India).

Our mission is to empower clients to grow revenue, optimize costs, and satisfy regulatory requirements through the innovative use of information and analytics. We leverage a unique blend of strategy, intellectual property, technical execution, and outsourcing to enable our clients to achieve significant returns on investment for their business, data, and technology initiatives. At the core of our philosophy is a quality-first, trust-building, delivery-focused client relationship. The foundation of this relationship is the talent of our team. By recruiting and retaining the best talent in the industry, we are able to deliver a broad range of customized, cutting-edge solutions to clients whose data volumes and requirements number among the largest in the world.
Read more

About Exusia

Founded:
2012
Type:
Services
Size:
100-1000
Stage:
Profitable

About

Exusia is a multinational firm that provides consulting and software-as-a-service solutions to leading organizations in the healthcare, finance, telecommunications, consumer products, hospitality, supply chain, and high-technology industries. It addresses the growing gap in the strategy and data engineering space as the next global leader in analytics, data engineering, and cloud computing solutions. Exusia is ISO 27001 certified and offers managed services to organizations facing the largest data volumes and the most complex data engineering requirements. Founded in 2012 in New York City, Exusia has its Americas headquarters in Miami, European headquarters in London, Africa headquarters in Johannesburg, and Asia headquarters in Pune, with delivery centers in Pune, Gurugram, Chennai, Hyderabad, and Bangalore.
Read more

Connect with the team

Dhaval Upadhyay

Company social profiles

LinkedIn | Twitter | Facebook

Similar jobs

Gipfel & Schnell Consultings Pvt Ltd
Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
₹9L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+10 more

Qualifications & Experience:


▪ 2 to 4 years of overall experience in ETL, data pipelines, data warehouse development, and database design

▪ Software solution development using Hadoop technologies such as MapReduce, Hive, Spark, Kafka, and YARN/Mesos (an illustrative sketch follows this list)

▪ Expert in SQL, with 2+ years of work on advanced SQL

▪ Good development skills in Java, Python, or other languages

▪ Experience with EMR and S3

▪ Knowledge of and exposure to BI applications, e.g., Tableau, QlikView

▪ Comfortable working in an agile environment
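To make the Hadoop/Spark bullet concrete, here is a minimal, hypothetical PySpark job of the kind this role describes: reading raw JSON events from S3, aggregating them with the DataFrame API, and writing partitioned Parquet back. Bucket names, paths, and the event schema are illustrative assumptions, not details from the posting.

# Minimal illustrative PySpark ETL job (hypothetical paths and schema).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-events-aggregation")
    .enableHiveSupport()  # allows reading/writing Hive tables as well
    .getOrCreate()
)

# Read raw JSON events from S3 (bucket and prefix are placeholders).
events = spark.read.json("s3://example-raw-bucket/events/2024-01-01/")

# Aggregate: daily event counts per user (assumes an event_ts column exists).
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "user_id")
    .agg(F.count("*").alias("event_count"))
)

# Write back as partitioned Parquet for downstream BI tools such as Tableau.
(
    daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/daily_event_counts/")
)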

Read more
Sigmoid
Posted by Jayakumar AS
Bengaluru (Bangalore), Hyderabad
2 - 5 yrs
₹12L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more

Sigmoid works with a variety of clients, from start-ups to Fortune 500 companies. We are looking for a detail-oriented self-starter to assist our engineering and analytics teams in various roles as a Software Development Engineer.


This position will be part of a growing team working towards building world-class, large-scale Big Data architectures. This individual should have a sound understanding of programming principles and experience programming in Java, Python, or similar languages, and can expect to spend a majority of their time coding.


Location - Bengaluru and Hyderabad


Responsibilities:

● Good development practices

○ Hands-on coder with good experience in programming languages like Java or Python.

○ Hands-on experience with the Big Data stack: PySpark, HBase, Hadoop, MapReduce, and Elasticsearch.

○ Good understanding of programming principles and development practices such as check-in policy, unit testing, and code deployment.

○ Self-starter able to grasp new concepts and technologies and translate them into large-scale engineering developments.

○ Excellent experience in application development and support, integration development, and data management.

● Align Sigmoid with key client initiatives

○ Interface daily with customers across leading Fortune 500 companies to understand strategic requirements

● Stay up to date on the latest technology to ensure the greatest ROI for the customer and Sigmoid

○ Hands-on coder with a good understanding of enterprise-level code

○ Design and implement APIs, abstractions, and integration patterns to solve challenging distributed computing problems

○ Experience in defining technical requirements, data extraction, data transformation, automating jobs, productionizing jobs, and exploring new big data technologies within a parallel processing environment


● Culture

○ Must be a strategic thinker with the ability to think unconventionally / out of the box.

○ Analytical and data-driven orientation.

○ Raw intellect, talent, and energy are critical.

○ Entrepreneurial and agile: understands the demands of a private, high-growth company.

○ Ability to be both a leader and a hands-on "doer".


Qualifications:

- A proven track record of relevant work experience and a degree in Computer Science or a related technical discipline are required.

- Experience with functional and object-oriented programming; Java is a must.

- Hands-on knowledge of MapReduce, Hadoop, PySpark, HBase, and Elasticsearch.

- Effective communication skills (both written and verbal).

- Ability to collaborate with a diverse set of engineers, data scientists, and product managers.

- Comfort in a fast-paced start-up environment.


Preferred Qualifications:

- Technical knowledge of MapReduce, Hadoop, and the GCS stack is a plus (a classic MapReduce-style sketch follows this list).

- Experience with agile methodology.

- Experience with database modeling and development, data mining, and warehousing.

- Experience in the architecture and delivery of enterprise-scale applications; capable of developing frameworks and design patterns; able to understand and tackle technical challenges, propose comprehensive solutions, and guide junior staff.

- Experience working with large, complex data sets from a variety of sources.
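As a concrete illustration of the MapReduce knowledge the posting asks for, here is a minimal word-count sketch using PySpark's RDD API, the classic MapReduce example. The input and output paths are placeholders, not details from the posting.

# Classic MapReduce-style word count with the PySpark RDD API (illustrative only).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()
sc = spark.sparkContext

counts = (
    sc.textFile("hdfs:///example/input/docs/")   # placeholder input path
    .flatMap(lambda line: line.split())          # map: emit one record per word
    .map(lambda word: (word, 1))                 # map: (word, 1) pairs
    .reduceByKey(lambda a, b: a + b)             # reduce: sum counts per word
)

counts.saveAsTextFile("hdfs:///example/output/wordcounts/")  # placeholder output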

Read more
Staffbee Solutions INC
Remote only
6 - 10 yrs
₹1L - ₹1.5L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+11 more

Looking to freelance?

We are seeking a freelance Data Engineer with 7+ years of experience.

Skills required: Deep knowledge of any cloud (AWS, Azure, Google Cloud), Databricks, data lakes, data warehousing, Python/Scala, SQL, BI, and other analytics systems.

 

What we are looking for

We are seeking an experienced Senior Data Engineer with experience in the architecture, design, and development of highly scalable data integration and data engineering processes.

 

  • The Senior Consultant must have a strong understanding of, and experience with, data & analytics solution architecture, including data warehousing, data lakes, ETL/ELT workload patterns, and related BI & analytics systems
  • Strong in scripting languages like Python and Scala
  • 5+ years of hands-on experience with one or more data integration/ETL tools
  • Experience building on-prem data warehousing solutions
  • Experience designing and developing ETLs, data marts, and star schemas (a star-schema sketch follows this list)
  • Experience designing a data warehouse solution using Synapse or Azure SQL DB
  • Experience building pipelines using Synapse or Azure Data Factory to ingest data from various sources
  • Understanding of the integration runtimes available in Azure
  • Advanced working SQL knowledge, experience with relational databases and query authoring (SQL), and working familiarity with a variety of databases
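To ground the star-schema bullet, here is a hypothetical PySpark sketch that builds a small fact table by joining a raw orders feed (landed in an Azure data lake) to an existing date dimension. Storage paths, table names, and columns are invented for illustration.

# Hypothetical star-schema fact load in PySpark (all names are illustrative).
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("fact-orders-load").getOrCreate()

# Raw orders landed in ADLS Gen2 (container and account are placeholders).
orders = spark.read.parquet("abfss://raw@exampleaccount.dfs.core.windows.net/orders/")

# Assumed existing date dimension keyed by calendar date with a surrogate date_key.
dim_date = spark.table("warehouse.dim_date")

fact_orders = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .join(dim_date, on="order_date", how="left")   # resolve the surrogate key
    .select(
        "date_key", "customer_id", "product_id",
        F.col("amount").cast("decimal(18,2)").alias("order_amount"),
    )
)

fact_orders.write.mode("append").saveAsTable("warehouse.fact_orders")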


Read more
Cloudera
Posted by Sushmitha Rengarajan
Remote, Bengaluru (Bangalore)
5 - 20 yrs
₹1L - ₹44L / yr
Java
Kubernetes
Docker
Hadoop
Apache Kafka
+3 more

 

Senior Software Engineer - 221254.

 

We (the Software Engineer team) are looking for a motivated, experienced person with a data-driven approach to join our Distribution Team in Budapest or Szeged to help design, execute, and improve our test sets and infrastructure for producing high-quality Hadoop software.

 

A Day in the life

 

You will be part of a team that makes sure our releases are predictable and deliver high value to the customer. This team is responsible for automating and maintaining our test harness, and making test results reliable and repeatable.

 

You will…

• work on making our distributed software stack more resilient to high-scale endurance runs and customer simulations

• provide valuable fixes to our product development teams for the issues you’ve found during exhaustive test runs

• work with product and field teams to make sure our customer simulations match expectations and can provide valuable feedback to our customers

• work with amazing people - we are a fun and smart team, including many of the top luminaries in Hadoop and related open-source communities. We frequently interact with the research community, collaborate with engineers at other top companies, and host cutting-edge researchers for tech talks.

• do innovative work - Cloudera pushes the frontier of big data and distributed computing, as our track record shows. We work on high-profile open-source projects, interacting daily with engineers at other exciting companies, speaking at meet-ups, etc.

• be part of a great culture - a transparent and open meritocracy. Everybody is always thinking of better ways to do things and coming up with ideas that make a difference. We build our culture to be the best workplace of our careers.

 

You have...

• strong knowledge of at least one of the following languages: Java / Python / Scala / C++ / C#

• hands-on experience with at least one of the following configuration management tools: Ansible, Chef, Puppet, Salt

• confidence with Linux environments

• the ability to identify critical weak spots in distributed software systems

• experience in developing automated test cases and test plans

• the ability to deal with distributed systems

• solid interpersonal skills conducive to a distributed environment

• the ability to work independently on multiple tasks

• a self-driven, motivated attitude, with a strong work ethic and a passion for problem solving

• the drive to innovate, automate, and break the code

The right person in this role has an opportunity to make a huge impact at Cloudera and add value to our future decisions. If this position has piqued your interest and you have what we described - we invite you to apply! An adventure in data awaits.
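Since the role centers on automated testing of Hadoop clusters, here is a small, hypothetical pytest-style sketch of the kind of post-run health check such a harness might include. The `hdfs dfsadmin -report` command is real; the parsing assumes its usual report format, which can vary by Hadoop version.

# Hypothetical post-run HDFS health checks (pytest collects test_* functions).
import re
import subprocess

def hdfs_report() -> str:
    """Run the real `hdfs dfsadmin -report` CLI and return its stdout."""
    result = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def test_no_missing_blocks():
    # An endurance run should leave the cluster with zero missing blocks.
    report = hdfs_report()
    match = re.search(r"Missing blocks:\s*(\d+)", report)
    assert match is not None, "report format differs from this sketch's assumption"
    assert int(match.group(1)) == 0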

 

Read more
Couture.ai
Posted by Deleted User
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹10L / yr
Big Data
Hadoop
DevOps
Apache Spark
Spark
+5 more
Skills Requirements
• Knowledge of Hadoop ecosystem installation, initial configuration, and performance tuning.
• Expert with Apache Ambari, Spark, Unix shell scripting, Kubernetes, and Docker (an Ambari sketch follows below).
• Knowledge of Python is desirable.
• Experience with HDP Manager/clients and various dashboards.
• Understanding of Hadoop security (Kerberos, Ranger, and Knox), encryption, and data masking.
• Experience with automation/configuration management using Chef, Ansible, or an equivalent.
• Strong experience with any Linux distribution.
• Basic understanding of network technologies, CPU, memory, and storage.
• Database administration is a plus.
Qualifications and Education Requirements
• 2 to 4 years of experience with, and detailed knowledge of, core Hadoop components, solutions, and dashboards running on Big Data technologies such as Hadoop/Spark.
• Bachelor's degree or equivalent in Computer Science, Information Technology, or related fields.
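As a small illustration of the Ambari skills listed above, here is a hedged Python sketch that queries Ambari's v1 REST API for the state of each cluster service. The endpoint shape follows Ambari's documented API, but the host, credentials, and cluster name are placeholders; verify the paths against your Ambari version.

# Hypothetical Ambari REST health probe (host and credentials are placeholders).
import requests

AMBARI = "http://ambari.example.com:8080"
AUTH = ("admin", "admin")  # never hard-code real credentials

def service_states(cluster: str) -> dict:
    """Return {service_name: state} for every service in the cluster."""
    url = f"{AMBARI}/api/v1/clusters/{cluster}/services"
    resp = requests.get(url, auth=AUTH, params={"fields": "ServiceInfo/state"})
    resp.raise_for_status()
    return {
        item["ServiceInfo"]["service_name"]: item["ServiceInfo"]["state"]
        for item in resp.json()["items"]
    }

if __name__ == "__main__":
    for name, state in service_states("example_cluster").items():
        print(f"{name}: {state}")  # e.g. "HDFS: STARTED"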
Read more
Hiring for one of the MNCs for an India location
Agency job
via Natalie Consultants by Rahul Kumar
Gurugram, Pune, Bengaluru (Bangalore), Delhi, Noida, Ghaziabad, Faridabad
2 - 9 yrs
₹8L - ₹20L / yr
Python
Hadoop
Big Data
Spark
Data engineering
+3 more

Key Responsibilities (Data Developer: Python, Spark)

Experience: 2 to 9 years

Development of data platforms, integration frameworks, processes, and code.

Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages.

Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests, and unit tests.

Elaborate stories in a collaborative agile environment (Scrum or Kanban).

Familiarity with cloud platforms like GCP, AWS, or Azure.

Experience with large data volumes.

Familiarity with writing REST-based services (a minimal sketch follows below).

Experience with distributed processing and systems.

Experience with Hadoop/Spark toolsets.

Experience with relational database management systems (RDBMS).

Experience with Data Flow development.

Knowledge of Agile and associated development techniques.
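Since the posting asks for REST-based services in Python for Business Intelligence applications, here is a minimal, hypothetical Flask sketch exposing one aggregated-metric endpoint. The route, data, and port are invented for illustration.

# Hypothetical Flask service exposing a BI-style metric endpoint.
from flask import Flask, jsonify

app = Flask(__name__)

# A real service would query a warehouse; this stub keeps the sketch self-contained.
DAILY_REVENUE = {"2024-01-01": 125000.0, "2024-01-02": 131500.5}

@app.route("/api/v1/revenue/<date>")
def revenue(date: str):
    """Return revenue for a given date, or 404 if no data exists."""
    if date not in DAILY_REVENUE:
        return jsonify({"error": "no data for date"}), 404
    return jsonify({"date": date, "revenue": DAILY_REVENUE[date]})

if __name__ == "__main__":
    app.run(port=8080)  # development server only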

Read more
Maveric Systems
Posted by Rashmi Poovaiah
Bengaluru (Bangalore), Chennai, Pune
4 - 10 yrs
₹8L - ₹15L / yr
Big Data
Hadoop
Spark
Apache Kafka
HiveQL
+2 more

Role Summary/Purpose:

We are looking for Developers/Senior Developers to be part of building an advanced analytics platform leveraging Big Data technologies and transforming legacy systems. This is an exciting, fast-paced, constantly changing, and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.

 

Requirements:

  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • A minimum of 4 to 8 years of overall software development experience and 2 years of Data Warehousing domain knowledge.
  • Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc. (a streaming sketch follows this list).
  • Excellent knowledge of SQL and Linux shell scripting.
  • Bachelor's/Master's/Engineering degree from a well-reputed university.
  • Strong communication, interpersonal, learning, and organizing skills, matched with the ability to manage stress, time, and people effectively.
  • Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment.
  • Ability to manage a diverse and challenging stakeholder community.
  • Diverse knowledge and experience of working on Agile deliveries and Scrum teams.
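As a hedged illustration of the Spark Streaming and Kafka skills listed above, here is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and maintains running counts per key. The broker address, topic, and checkpoint path are placeholders, and the job needs the spark-sql-kafka connector package on the classpath.

# Minimal Spark Structured Streaming consumer for Kafka (placeholder endpoints).
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Read the topic as an unbounded streaming DataFrame.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")  # placeholder
    .option("subscribe", "transactions")                           # placeholder topic
    .load()
)

# Kafka delivers key/value as binary; decode the key and count events per key.
counts = (
    stream.select(F.col("key").cast("string").alias("key"))
    .groupBy("key")
    .count()
)

# Print running counts to the console; a real job would write to Hive or HBase.
query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/kafka-sketch")  # placeholder
    .start()
)
query.awaitTermination()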

 

Responsibilities

  • Should work as a senior developer/individual contributor depending on the situation
  • Should be part of Scrum discussions and requirements gathering
  • Adhere to the Scrum timeline and deliver accordingly
  • Participate in a team environment for design, development, and implementation
  • Should take on L3 activities on a need basis
  • Prepare unit/SIT/UAT test cases and log the results
  • Coordinate SIT and UAT testing; take feedback and provide any necessary remediation/recommendations in time
  • Quality delivery and automation should be top priorities
  • Coordinate change and deployment in time
  • Should create healthy harmony within the team
  • Owns interaction points with members of the core team (e.g., the BA, testing, and business teams) and any other relevant stakeholders
Read more
Dremio
Posted by Maharaja Subramanian (CW)
Remote, Bengaluru (Bangalore), Hyderabad
3 - 10 yrs
₹15L - ₹65L / yr
Java
C++
Microservices
Algorithms
Data Structures
+10 more

Be Part Of Building The Future

Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.

About the Role

The Dremio India team owns the Data Lake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open-source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in one's career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.

Responsibilities & ownership

  • Lead, build, deliver, and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
  • Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low-latency access to distributed storage, auto-scaling, and self-healing.
  • Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
  • Lead the team to solve complex and unknown problems.
  • Solve technical problems and customer issues with technical expertise.
  • Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure.
  • Mentor other team members for high quality and design.
  • Collaborate with Product Management to deliver on customer requirements and innovation.
  • Collaborate with Support and field teams to ensure that customers are successful with Dremio.

Requirements

  • B.S./M.S./equivalent in Computer Science or a related technical field, or equivalent experience
  • Fluency in Java/C++ with 8+ years of experience developing production-level software
  • Strong foundation in data structures, algorithms, multithreaded and asynchronous programming models, and their use in developing distributed and scalable systems (a small sketch follows this list)
  • 5+ years of experience developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
  • Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
  • Passion for quality, zero-downtime upgrades, availability, resiliency, and uptime of the platform
  • Passion for learning and delivering using the latest technologies
  • Ability to solve ambiguous, unexplored, and cross-team problems effectively
  • Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
  • Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
  • Understanding of distributed file systems such as S3, ADLS, or HDFS
  • Excellent communication skills and an affinity for collaboration and teamwork
  • Ability to work individually and collaboratively with other team members
  • Ability to scope and plan solutions for big problems and mentor others on the same
  • Interest and motivation to be part of a fast-moving startup with a fun and accomplished team
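The requirements above emphasize multithreaded and asynchronous programming models. Dremio's stack is Java/C++, but as a small language-agnostic illustration, here is a hedged Python sketch that fans I/O-bound work out across a thread pool and gathers results as they complete; the URLs are placeholders.

# Hedged sketch: fan out I/O-bound tasks across a thread pool, gather results.
from concurrent.futures import ThreadPoolExecutor, as_completed
import urllib.request

URLS = [  # placeholder endpoints for illustration
    "https://example.com/a",
    "https://example.com/b",
    "https://example.com/c",
]

def fetch(url: str) -> tuple[str, int]:
    """Fetch a URL and return (url, response size in bytes)."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return url, len(resp.read())

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(fetch, u): u for u in URLS}
    for fut in as_completed(futures):
        try:
            url, size = fut.result()
            print(f"{url}: {size} bytes")
        except Exception as exc:  # one failed fetch should not kill the batch
            print(f"{futures[fut]} failed: {exc}")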
Read more
Arque Capital
Posted by Hrishabh Sanghvi
Mumbai
5 - 11 yrs
₹15L - ₹30L / yr
C++
Big Data
Technical Architecture
Cloud Computing
Python
+4 more
ABOUT US: Arque Capital is a FinTech startup working with AI in finance, in domains like asset management (hedge funds, ETFs, and structured products), robo-advisory, bespoke research, alternate brokerage, and other applications of technology and quantitative methods in big finance.

PROFILE DESCRIPTION:
1. Get the "tech" in order for the hedge fund: help answer the fundamentals of which technology blocks to use and the choice of one platform/technology over another, helping the team visualize the product with the available resources and assets
2. Build, manage, and validate a tech roadmap for our products
3. Architecture practices: at startups, the dynamics change very fast, so making sure that best practices are defined and followed by the team is very important. The CTO may have to clean up the code from time to time, and reviewing code quality is an important activity the CTO should own.
4. Build a progressive learning culture and establish a predictable model for envisioning, designing, and developing products
5. Product innovation through research and continuous improvement
6. Build out the technological infrastructure for the hedge fund
7. Hire and build out the technology team
8. Set up and manage the entire IT infrastructure, hardware as well as cloud
9. Ensure company-wide security and IP protection

REQUIREMENTS:
• Computer Science engineering degree from Tier-I colleges only (IIT, IIIT, NIT, BITS, DHU, Anna University, MU)
• 5-10 years of relevant technology experience (no infra or database persons)
• Expertise in Python and C++ (3+ years minimum)
• 2+ years of experience building and managing Big Data projects
• Experience with technical design and architecture (1+ years minimum)
• Experience with high-performance computing (optional)
• Experience as a Tech Lead, IT Manager, Director, VP, or CTO
• 1+ years of experience managing cloud computing infrastructure (Amazon AWS preferred) (optional)
• Ability to work in an unstructured environment
• Looking to work in a small, start-up type environment based out of Mumbai

COMPENSATION: Co-founder status and equity partnership
Read more
Factspan
Posted by Pradeepan Jayachandran
Detroit, USA
3 - 7 yrs
₹5L - ₹10L / yr
SQL
Advanced Excel
R
Tableau
Factspan Overview: Established in 2012, Factspan is a leading pure-play analytics company with expertise in converting data into actionable business insights. Our teams of mathematicians, statisticians, and data scientists are experienced in various aspects of decision sciences. We complement these horizontal skills with knowledge of our clients' industries. As needed, we also supplement our US-based teams with resources in cost-effective locations such as India. Headquartered in Seattle, we also have offices in the San Francisco Bay Area and Bangalore. Our clients include Fortune 100 companies in the retail, financial services, insurance, and technology industries. Our vision is to be the foremost player in the business analytics space.

Roles and Responsibilities:
• Interact with the client/onsite team to understand the client's business problems; define, execute, and deliver the analytical solution to those problems
• Conceptualize the analytical methodology for business problems and propose an end-to-end process flowchart for the project/business problem with a possible set of statistical solutions
• Discuss and finalize the above methodology with the client, get their buy-in, and guide the offshore resources on the selected procedures
• Communicate expectations and establish deadlines in agreement with the client and delivery team
• Understand the nature of the data and how it relates to the problem at hand
• Manage operational workflow and ensure seamless delivery for both quick-turnaround analyses and long-term projects
• Ensure delivery is smooth and error-free
• Communicate analysis and recommendations to all levels of the business through written and verbal communications; follow up with the client for any modifications and feedback
• Carry out all aspects of project management, including work planning, estimation, scope refinement, quality, and delivery
• Communicate difficult messages to the client with persuasiveness and sensitivity so that delivery and business are not impacted
• Have excellent skills in developing PowerPoint presentations and sales collateral
• Be part of sales presentations and contribute to RFPs/RFIs, thereby helping win new business from existing and new clients

Qualification & Experience:
• Experience in marketing/retail/social media/supply chain/loyalty analytics
• Bachelor's or Master's degree in Economics, Statistics, Econometrics, Mathematics, Engineering, or Operations Research
• Ability to think: possess strong analytical/logical thinking skills
• Strong structured problem-solving skills
• Experience with statistical software and languages such as SQL, R, etc.
• Excellent skills in writing SQL, PL/SQL, and stored procedures
• Experience working with US clients
• Excellent MS Office skills for presenting data tables and charts
• Proven experience with MS Excel, VBA, and Tableau for reporting and visualization of data analysis

Behavioral Attributes:
• Analytical/strategic/conceptual thinking
• Excellent verbal/written communication
• Excellent collaboration/interpersonal/presentation skills
• Strong project management skills

Reference Company Links:
• www.factspan.com
• https://www.linkedin.com/company/factspan-inc
• https://www.facebook.com/factspan
Read more