Big Data Developer
Intelliswift Software

Posted by Pratish Mishra
4 - 8 yrs
₹8L - ₹17L / yr
Chennai
Skills
Big Data
Spark
Scala
SQL
Greetings from Intelliswift! Intelliswift Software Inc. is a premier software solutions and services company headquartered in Silicon Valley, with offices across the United States, India, and Singapore. The company has a proven track record of delivering results through its global delivery centers and flexible engagement models for over 450 brands, ranging from the Fortune 100 to growing companies. Intelliswift provides a variety of services including Enterprise Applications, Mobility, Big Data / BI, Staffing Services, and Cloud Solutions. Growing at an outstanding rate, it has been recognized as the second-largest private IT company in the East Bay.

Domains: IT, Retail, Pharma, Healthcare, BFSI, and Internet & E-commerce
Website: https://www.intelliswift.com/

Experience: 4-8 years
Job Location: Chennai

Job Description:
Skills: Spark, Scala, Big Data, Hive
- Strong working experience in Spark, Scala, Big Data, HBase, and Hive.
- Good working experience in SQL and Spark SQL.
- Good to have knowledge of or experience with Teradata.
- Familiar with general engineering tools: Git, Jenkins, sbt, Maven.
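To give a concrete feel for the Spark SQL and Hive skills listed above, here is a minimal, hypothetical sketch in Scala (the database, table, and column names are illustrative only, not from this posting, and a configured Hive metastore is assumed):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersReport {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport lets Spark SQL read tables registered in the Hive metastore.
    val spark = SparkSession.builder()
      .appName("orders-report")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical Hive table; substitute a real database.table name.
    val orders = spark.table("sales.orders")

    // DataFrame equivalent of:
    //   SELECT region, SUM(amount) AS total_amount
    //   FROM sales.orders WHERE status = 'COMPLETED'
    //   GROUP BY region ORDER BY total_amount DESC
    val totals = orders
      .filter(col("status") === "COMPLETED")
      .groupBy("region")
      .agg(sum("amount").as("total_amount"))
      .orderBy(desc("total_amount"))

    totals.show(20)
    spark.stop()
  }
}
```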

About Intelliswift Software

Founded: 2001
Size: 100-1000
Stage: Profitable
About
Intelliswift Software, Inc. is a premier onsite/offshore software solutions and services company, headquartered in Silicon Valley with offices across the United States, India, and Singapore. We are recognized as the second-largest private IT company and ranked among the 50 fastest-growing private companies in the East Bay. Expanding at an exponential rate yet agile in its approach, Intelliswift has advanced far beyond where it started. After spreading deep roots as a staffing solutions leader, the company expanded and acquired a software development center in Mumbai, India in 2012. Intelliswift continues to expand its global presence by opening offices in Singapore and Bengaluru, India.

Some noteworthy achievements include transforming a bank with efficient business solutions involving the Core Banking System (CBS), creating an e-commerce store with the latest security solutions for a retail giant, and building a database designer with a hierarchical system for communication in a hospital or clinic. These achievements emerged because of our clients' trust in our expertise, which spans a vast array of technological development services. Our exceptional leaders and engineers work on software applications, web development, mobility solutions, and cloud-based services. Our best practices and progressive work culture have also steered us into R&D in augmented reality.

We work closely with our clients to help them successfully build and execute their most critical strategies. Intelliswift has a proven track record of delivering results through its global delivery centers and flexible engagement models for over 450 brands, ranging from the Fortune 100 to growing companies. Intelliswift provides a variety of services including Enterprise Applications, Mobility, Big Data/BI, Staffing Services, and Cloud Solutions. Our enterprise clientele includes leading companies like eBay, PayPal, DIRECTV, Expedia, Oracle, Cisco, and many more. The talent nurtured at Intelliswift, combined with our global network, makes us your best ally. We support organizations of all sizes and are committed to utilizing available resources to ensure our services are tailored to meet each client's specific requirements.
Connect with the team
Anitha G
Bhavani Thannidi
Shabana K
Nisha R
Arokiaraj Christopher
Matada Anusha
Tejas N
Pratish Mishra
Radhika Hegde
Manmeet Singh
Nayan Dhanore
smitha mp

Similar jobs

Matellio India Private Limited
Posted by Harshit Sharma
Remote only
8 - 15 yrs
₹10L - ₹27L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
Deep Learning
+7 more

Responsibilities include:

  • Convert the machine learning models into application program interfaces (APIs) so that other applications can use them
  • Build AI models from scratch and help different parts of the organization (such as product managers and stakeholders) understand the results they gain from the model
  • Build data ingestion and data transformation infrastructure
  • Automate infrastructure that the data science team uses
  • Perform statistical analysis and tune the results so that the organization can make better-informed decisions
  • Set up and manage AI development and product infrastructure
  • Be a good team player, as coordinating with others is a must
Simpl
Posted by Elish Ismael
Bengaluru (Bangalore)
3 - 10 yrs
₹10L - ₹50L / yr
Java
Apache Spark
Big Data
Hadoop
Apache Hive
About Simpl
The thrill of working at a start-up that is starting to scale massively is something else. Simpl (FinTech Startup of the Year 2020) was formed in 2015 by Nitya Sharma, an investment banker from Wall Street, and Chaitra Chidanand, a tech executive from the Valley, when they teamed up with a very clear mission: to make money simple so that people can live well and do amazing things. Simpl is the payment platform for the mobile-first world. We're backed by some of the best names in fintech globally (folks who have invested in Visa, Square and Transferwise), and Joe Saunders, ex-Chairman and CEO of Visa, is a board member.

Everyone at Simpl is an internal entrepreneur who is given a lot of bandwidth and resources to create the next breakthrough towards the long-term vision of “making money Simpl”. Our first product is a payment platform that lets people buy instantly, anywhere online, and pay later. In the background, Simpl uses big data for credit underwriting and risk and fraud modelling, all without any paperwork, and enables banks and non-bank financial companies to access a whole new consumer market.

In place of traditional forms of identification and authentication, Simpl integrates deeply into merchant apps via SDKs and APIs. This allows for more sophisticated forms of authentication that take full advantage of smartphone data and processing power.

Skillset:
- Workflow manager/scheduler such as Airflow, Luigi, Oozie
- Good handle on Python
- ETL experience
- Batch processing frameworks like Spark, MR/Pig (see the sketch after this list)
- File formats: Parquet, JSON, XML, Thrift, Avro, Protobuf
- Rule engines (Drools, a business rule management system)
- Distributed file systems and stores like HDFS, NFS, AWS S3 and equivalent
- Built/configured dashboards

Nice to have:
- Data platform experience, e.g. building data lakes, working with near-realtime applications/frameworks like Storm, Flink, Spark
- AWS
- File encoding types: Thrift, Avro, Protobuf, Parquet, JSON, XML
- Hive, HBase
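As a hedged illustration of the batch-processing and file-format bullets above, a minimal Spark batch job in Scala might read Parquet and emit JSON along these lines (bucket names, paths, and columns are hypothetical):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventBatchJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-batch-job")
      .getOrCreate()

    // Hypothetical input path; Parquet carries its own schema,
    // so no explicit schema definition is needed here.
    val events = spark.read.parquet("s3a://example-bucket/events/dt=2024-01-01/")

    // A typical ETL step: filter, derive a column, aggregate.
    val hourly = events
      .filter(col("event_type") === "purchase")
      .withColumn("hour", hour(col("event_ts")))
      .groupBy("hour")
      .agg(count(lit(1)).as("purchases"))

    // Write the summary as JSON for a downstream dashboard (also hypothetical).
    hourly.coalesce(1).write.mode("overwrite")
      .json("s3a://example-bucket/reports/hourly_purchases/")

    spark.stop()
  }
}
```

Reading from s3a:// paths additionally assumes the hadoop-aws connector is on the classpath; the same code works unchanged against HDFS or local paths.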
Amagi Media Labs
Posted by Rajesh C
Chennai
15 - 18 yrs
Best in industry
Data architecture
Architecture
Data Architect
Architect
Java
+5 more
Job Title: Data Architect
Job Location: Chennai
Job Summary

The Engineering team is seeking a Data Architect. As a Data Architect, you will drive the data architecture strategy across various Data Lake platforms. You will help develop reference architectures and roadmaps to build highly available, scalable and distributed data platforms, using cloud-based solutions to process high-volume, high-velocity and widely varied structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities

The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design and development delivery experience.
• Solid experience in Agile methodologies (Kanban and Scrum).
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage etc.)/ELT and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions; data lake knowledge is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of Agile frameworks and delivery.

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to any one NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, and context models
About Condé Nast

CONDÉ NAST INDIA (DATA)

Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms, generating a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aims to create a company culture where data is the common language and to facilitate an environment where insights shared in real time can improve performance.

The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups: Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops and Client Services), along with Data Strategy and Monetization. The teams build capabilities and products to create data-driven solutions for better audience engagement.

What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
DataMetica
Posted by Sumangali Desai
Pune, Hyderabad
7 - 12 yrs
₹7L - ₹20L / yr
Apache Spark
Big Data
Spark
Scala
Hadoop
+3 more
We at Datametica Solutions Private Limited are looking for a Big Data Spark Lead who has a passion for the cloud, with knowledge of different on-premise and cloud data implementations in the field of Big Data and analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.

Job Description
Experience: 7+ years
Location: Pune / Hyderabad
Skills:
  • Drive and participate in requirements-gathering workshops, estimation discussions, design meetings and status review meetings
  • Participate and contribute in solution design and solution architecture for implementing Big Data projects on-premise and on the cloud
  • Hands-on technical experience in the design, coding, development and management of large Hadoop implementations
  • Proficient in SQL, Hive, Pig, Spark SQL, shell scripting, Kafka, Flume and Sqoop on large Big Data and data warehousing projects, with a Java, Python or Scala based Hadoop programming background
  • Proficient with various development methodologies such as waterfall, agile/scrum and iterative
  • Good interpersonal skills and excellent communication skills for US- and UK-based clients

About Us!
A global leader in data warehouse migration and modernization to the cloud, we empower businesses by migrating their data, workloads, ETL and analytics to the cloud by leveraging automation.

We have expertise in transforming legacy platforms such as Teradata, Oracle, Hadoop, Netezza, Vertica and Greenplum, along with ETLs like Informatica, DataStage, Ab Initio and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes and cloud optimization.

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.


We have our own products!
Eagle – Data Warehouse Assessment & Migration Planning product
Raven – Automated Workload Conversion product
Pelican – Automated Data Validation product, which helps automate and accelerate data migration to the cloud.

Why join us!
Datametica is a place to innovate, bring new ideas to life and learn new things. We believe in building a culture of innovation, growth and belonging. Our people and their dedication over the years are the key factors in achieving our success.

Benefits we Provide!
Working with highly technical, passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy

Check out more about us on our website below!
www.datametica.com
Fragma Data Systems
Posted by Priyanka U
Remote only
4 - 10 yrs
₹12L - ₹23L / yr
Informatica
ETL
Big Data
Spark
SQL
Skill: Informatica with Big Data Management (BDM)

1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working on Spark/SQL
3. Develops Informatica mappings/SQL
4. Should have experience in Hadoop, Spark, etc.

Work days: Sunday to Thursday
Day shift
Data Team
Agency job
via Oceanworld by Chandan J
Remote only
8 - 12 yrs
₹10L - ₹20L / yr
Big Data
Data engineering
Hadoop
data engineer
Apache Hive
+1 more
Senior Data Engineer (SDE)

(Hadoop, HDFS, Kafka, Spark, Hive)

Overall experience: 8 to 12 years

Relevant Big Data experience: 3+ years of the above

Salary: up to ₹20 LPA

Job location: Chennai / Bangalore

Notice period: immediate joiner, or 15 to 20 days maximum

The Responsibilities of The Senior Data Engineer Are:

- Requirements gathering and assessment

- Break down complexity and translate requirements into specification artifacts and storyboards to build towards, using a test-driven approach

- Engineer scalable data pipelines using big data technologies including but not limited to Hadoop, HDFS, Kafka, HBase, Elastic

- Implement the pipelines using execution frameworks including but not limited to MapReduce, Spark and Hive, using Java/Scala/Python for application design (see the sketch after this list)

- Mentoring juniors in a dynamic team setting

- Manage stakeholders with proactive communication upholding TheDataTeam's brand and values
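To make the pipeline responsibilities above concrete, here is a minimal sketch, in Scala, of a Spark Structured Streaming job that lands Kafka events on HDFS. The broker, topic, and paths are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClickstreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("clickstream-ingest")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical broker and topic names.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "clickstream")
      .load()

    // Kafka delivers key/value as binary; decode the value to a string
    // and keep the broker timestamp for partitioning.
    val lines = raw.selectExpr("CAST(value AS STRING) AS json", "timestamp")

    // Land raw events on HDFS as Parquet, partitioned by ingest date.
    // The checkpoint lets the job restart without duplicating output files.
    val query = lines
      .withColumn("dt", to_date($"timestamp"))
      .writeStream
      .format("parquet")
      .option("path", "hdfs:///data/clickstream/raw")
      .option("checkpointLocation", "hdfs:///checkpoints/clickstream")
      .partitionBy("dt")
      .start()

    query.awaitTermination()
  }
}
```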

A Candidate Must Have the Following Skills:

- Strong problem-solving ability

- Excellent software design and implementation ability

- Exposure and commitment to agile methodologies

- Detail oriented with willingness to proactively own software tasks as well as management tasks, and see them to completion with minimal guidance

- Minimum 8 years of experience

- Should have experience in the full life cycle of at least one big data application

- Strong understanding of various storage formats (ORC/Parquet/Avro)

- Should have hands-on experience in one of the Hadoop distributions (Hortonworks/Cloudera/MapR)

- Experience in at least one cloud environment (GCP/AWS/Azure)

- Should be well versed with at least one database (MySQL/Oracle/MongoDB/Postgres)

- Bachelor's in Computer Science, and preferably a Master's as well

- Should have good code review and debugging skills

Additional skills (Good to have):

- Experience in containerization (Docker/Heroku)

- Exposure to microservices

- Exposure to DevOps practices

- Experience in performance tuning of big data applications
Japan Based Leading Company
Bengaluru (Bangalore)
3 - 10 yrs
₹0L - ₹20L / yr
Big Data
Amazon Web Services (AWS)
Java
Python
MySQL
+2 more
A data engineer with AWS Cloud infrastructure experience to join our Big Data Operations team. This role will provide advanced operations support, contribute to automation and system improvements, and work directly with enterprise customers to provide excellent customer service.
The candidate:
1. Must have very good hands-on technical experience of 3+ years with Java or Python
2. Working experience and good understanding of AWS Cloud; advanced experience with IAM policy and role management
3. Infrastructure operations: 5+ years supporting systems infrastructure operations, upgrades, deployments using Terraform, and monitoring
4. Hadoop: experience with Hadoop (Hive, Spark, Sqoop) and/or AWS EMR
5. Knowledge of PostgreSQL/MySQL/DynamoDB backend operations
6. DevOps: experience with DevOps automation - orchestration/configuration management and CI/CD tools (Jenkins)
7. Version control: working experience with one or more version control platforms like GitHub or GitLab
8. Knowledge of AWS QuickSight reporting
9. Monitoring: hands-on experience with monitoring tools such as AWS CloudWatch, AWS CloudTrail, Datadog and Elasticsearch
10. Networking: working knowledge of TCP/IP networking, SMTP, HTTP, load balancers (ELB) and high-availability architecture
11. Security: experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment; familiar with penetration testing and scan tools for remediation of security vulnerabilities
12. Demonstrated ability to learn new technologies quickly
WHAT WILL BE THE ROLES AND RESPONSIBILITIES?
1. Create procedures/run books for operational and security aspects of AWS platform
2. Improve AWS infrastructure by developing and enhancing automation methods
3. Provide advanced business and engineering support services to end users
4. Lead other admins and platform engineers through design and implementation decisions to achieve balance between strategic design and tactical needs
5. Research and deploy new tools and frameworks to build a sustainable big data platform
6. Assist with creating programs for training and onboarding for new end users
7. Lead Agile/Kanban workflows and team process work
8. Troubleshoot issues to resolve problems
9. Provide status updates to Operations product owner and stakeholders
10. Track all details in the issue tracking system (JIRA)
11. Provide issue review and triage problems for new service/support requests
12. Use DevOps automation tools, including Jenkins build jobs
13. Fulfil ad-hoc data or report requests from different functional groups
Remote, Mumbai
10 - 18 yrs
₹30L - ₹55L / yr
Scala
Big Data
Java
Amazon Web Services (AWS)
ETL

What's the role?

Your role as a Principal Engineer will involve working with various teams. As a principal engineer, you will need full knowledge of the software development lifecycle and Agile methodologies. You will demonstrate multi-tasking skills under tight deadlines and constraints. You will regularly contribute to the development of work products (including analyzing, designing, programming, debugging, and documenting software) and may work with customers to resolve challenges and respond to suggestions for improvements and enhancements. You will set up the standards and principles for the product you drive.

  • Set up coding practices, guidelines & quality standards for the software delivered.
  • Determines operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions.
  • Documents and demonstrates solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code.
  • Prepares and installs solutions by determining and designing system specifications, standards, and programming.
  • Improves operations by conducting systems analysis and recommending changes in policies and procedures.
  • Updates job knowledge by studying state-of-the-art development tools, programming techniques, and computing equipment; participating in educational opportunities; reading professional publications; maintaining personal networks; participating in professional organizations.
  • Protects operations by keeping information confidential.
  • Develops software solutions by studying information needs; conferring with users; studying systems flow, data usage, and work processes; investigating problem areas; following the software development lifecycle.

Who are you? You are a go-getter with an eye for detail, strong problem-solving and debugging skills, and a BE/MCA/ME/M.Tech or equivalent degree from a reputed college/university.

 

Essential Skills / Experience:

  • 10+ years of engineering experience
  • Experience in designing and developing high-volume web services using API protocols and data formats
  • Proficient in API modelling languages and annotation
  • Proficient in Java programming
  • Experience with Scala programming
  • Experience with ETL systems
  • Experience with Agile methodologies
  • Experience with Cloud service & storage
  • Proficient in Unix/Linux operating systems
  • Excellent oral and written communication skills

Preferred:
  • Functional programming languages (Scala, etc.)
  • Scripting languages (bash, Perl, Python, etc.)
  • Amazon Web Services (Redshift, ECS, etc.)
Yulu Bikes
Posted by Keerthana k
Bengaluru (Bangalore)
1 - 2 yrs
₹7L - ₹12L / yr
Data Science
Data Analytics
SQL
Python
Datawarehousing
+2 more
Skill Set
SQL, Python, NumPy, Pandas. Knowledge of Hive and data warehousing concepts will be a plus.

JD 

- Strong analytical skills with the ability to collect, organise, analyse and interpret trends or patterns in complex data sets and provide reports & visualisations.

- Work with management to prioritise business KPIs and information needs, and locate and define new process improvement opportunities.

- Technical expertise with data models, database design and development, data mining and segmentation techniques

- Proven success in a collaborative, team-oriented environment

- Working experience with geospatial data will be a plus.
Chariot Tech
Posted by Raj Garg
NCR (Delhi | Gurgaon | Noida)
1 - 5 yrs
₹15L - ₹16L / yr
Machine Learning (ML)
Big Data
Data Science
We are looking for a Machine Learning Developer who possesses a passion for machine technology & big data and will work with our next-generation Universal IoT platform.

Responsibilities:
• Design and build machines that learn, predict and analyze data.
• Build and enhance tools to mine data at scale
• Enable the integration of Machine Learning models in the Chariot IoT Platform
• Ensure the scalability of Machine Learning analytics across millions of networked sensors
• Work with other engineering teams to integrate our streaming, batch, or ad-hoc analysis algorithms into Chariot IoT's suite of applications
• Develop generalizable APIs so other engineers can use our work without needing to be machine learning experts