
Java Developer

Posted by Jayaraj Esvar
2 - 3 yrs
₹2L - ₹6L / yr
Bengaluru (Bangalore), Coimbatore
Skills
Spring
Java
Multithreading
Maven
JUnit
We are looking for good Java Developers for our IoT platform. If you are looking for a change, please drop in your profile at www.fernlink.com
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Subodh Popalwar

Software Engineer, Memorres
For 2 years, I had trouble finding a company with a good work culture and a role that would help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation, and what each company was clearly offering.

About Fernlink Technologies Pvt Ltd

Founded: 2015
Stage: Bootstrapped
About
Founded in 2015, Fernlink Technologies Pvt Ltd is a bootstrapped company based in India. It currently has 1-5 employees and works in the Internet of Things (IoT) domain.
Connect with the team
Jayaraj Esvar
Company social profiles
LinkedIn

Similar jobs

Remote only
3 - 8 yrs
₹20L - ₹26L / yr
Airflow
Amazon Redshift
Amazon Web Services (AWS)
Java
ETL
• Experience with cloud-native data tools/services such as AWS Athena, AWS Glue, Redshift Spectrum, AWS EMR, AWS Aurora, BigQuery, Bigtable, S3, etc.
• Strong programming skills in at least one of the following languages: Java, Scala, C++.
• Familiarity with a scripting language like Python, as well as Unix/Linux shells.
• Comfortable with multiple AWS components, including RDS, AWS Lambda, AWS Glue, AWS Athena, and EMR. Equivalent tools in the GCP stack will also suffice.
• Strong analytical skills and advanced SQL knowledge, including indexing and query optimization techniques.
• Experience implementing software around data processing, metadata management, and ETL pipeline tools like Airflow.

Experience with the following software/tools is highly desired:

• Apache Spark, Kafka, Hive, etc.
• SQL and NoSQL databases like MySQL, Postgres, DynamoDB.
• Workflow management tools like Airflow.
• AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR.
• Familiarity with Spark programming paradigms (batch and stream processing).
• RESTful API services.
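The bullets above are a skills checklist rather than an explanation, but the "indexing and query optimization" item can be made concrete. A minimal sketch using Python's built-in sqlite3 as a stand-in for the engines named above; the table, data, and index names are invented for illustration:

```python
import sqlite3

# Compare query plans before and after adding an index -- the basic
# "query optimization" workflow, with sqlite3 standing in for the RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, kind TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, kind) VALUES (?, ?)",
    [(i % 100, "click") for i in range(1000)],
)

def plan(query):
    """Return the query plan as a single string."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
    return " ".join(r[-1] for r in rows)  # last column is the plan detail

q = "SELECT COUNT(*) FROM events WHERE user_id = 42"
before = plan(q)  # full table scan: no index on user_id yet
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
after = plan(q)   # now the planner can use the index

print("before:", before)  # e.g. a SCAN over events
print("after:", after)    # e.g. a SEARCH using idx_events_user
```

The same before/after plan check applies, with different syntax (`EXPLAIN`, `EXPLAIN ANALYZE`), in MySQL, Postgres, Redshift, or Athena.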
Cloudera
Posted by Sushmitha Rengarajan
Bengaluru (Bangalore)
3 - 20 yrs
₹1L - ₹44L / yr
ETL
Informatica
Data Warehouse (DWH)
Relational Database (RDBMS)
Data Structures

 

The Cloudera Data Warehouse Hive team is looking for a passionate senior developer to join our growing engineering team. This group targets the biggest enterprises wanting to use Cloudera's services in private and public cloud environments. Our product is built on open-source technologies like Hive, Impala, Hadoop, Kudu, Spark, and many more, providing unlimited learning opportunities.

A Day in the Life: Over the past 10+ years, Cloudera has experienced tremendous growth, making us the leading contributor to Big Data platforms and ecosystems and a leading provider of enterprise solutions based on Apache Hadoop. You will work with some of the best engineers in the industry, tackling challenges that will continue to shape the Big Data revolution. We foster an engaging, supportive, and productive work environment where you can do your best work. The team culture values engineering excellence, technical depth, grassroots innovation, teamwork, and collaboration.

You will manage product development for our CDP components and develop engineering tools and scalable services to enable efficient development, testing, and release operations. You will be immersed in many exciting, cutting-edge technologies and projects, including collaboration with developers, testers, product, field engineers, and our external partners, both software and hardware vendors.

Opportunity: Cloudera is a leader in the fast-growing big data platforms market. This is a rare chance to make a name for yourself in the industry and in the open-source world. The candidate will be responsible for the Apache Hive and CDW projects. We are looking for a candidate who would like to work on these projects both upstream and downstream. If you are curious about the project and code quality, you can check out the Apache Hive project and its code, and even start development before you join. This is one of the beauties of the OSS world.

 

Responsibilities:

• Build robust and scalable data infrastructure software
• Design and create services and system architecture for your projects
• Improve code quality through unit tests, automation, and code reviews
• Write Java code and/or build services in the Cloudera Data Warehouse
• Work with a team of engineers who review each other's code and designs and hold each other to an extremely high bar for quality
• Understand the basics of Kubernetes
• Build out the production and test infrastructure
• Develop automation frameworks to reproduce issues and prevent regressions
• Work closely with other developers providing services to our system
• Help analyze and understand how customers use the product, and improve it where necessary
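The "unit tests, automation, and code reviews" responsibility can be illustrated with a minimal example. The team's actual stack is Java/JUnit; as a language-neutral sketch, the same discipline looks like this in Python's unittest, with a hypothetical `partition_rows` helper invented for the example:

```python
import unittest

def partition_rows(rows, num_buckets):
    """Hypothetical helper: hash-partition rows into buckets, as a data
    warehouse engine might do before a parallel aggregation or join."""
    if num_buckets <= 0:
        raise ValueError("num_buckets must be positive")
    buckets = [[] for _ in range(num_buckets)]
    for row in rows:
        buckets[hash(row) % num_buckets].append(row)
    return buckets

class PartitionRowsTest(unittest.TestCase):
    def test_all_rows_kept(self):
        # Partitioning must not drop or duplicate rows.
        rows = [("a", 1), ("b", 2), ("c", 3)]
        buckets = partition_rows(rows, 2)
        self.assertEqual(sorted(r for b in buckets for r in b), sorted(rows))

    def test_same_row_same_bucket(self):
        # A row must always land in the same bucket: required for joins.
        b1 = partition_rows([("a", 1)], 4)
        b2 = partition_rows([("a", 1)], 4)
        self.assertEqual(
            [i for i, b in enumerate(b1) if b],
            [i for i, b in enumerate(b2) if b],
        )

    def test_rejects_bad_bucket_count(self):
        with self.assertRaises(ValueError):
            partition_rows([], 0)
```

Saved as `test_partition.py`, this runs under `python -m unittest test_partition`; the point is the habit of testing invariants (no rows lost, deterministic placement, bad inputs rejected), not the helper itself.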

Qualifications:

• Deep familiarity with the Java programming language
• Hands-on experience with distributed systems
• Knowledge of database concepts and RDBMS internals
• Knowledge of the Hadoop stack, containers, or Kubernetes is a strong plus
• Experience working in a distributed team
• 3+ years of experience in software development

 

Virtusa
Posted by Priyanka Sathiyamoorthi
Chennai
11 - 15 yrs
₹15L - ₹33L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

We are looking for a Big Data Engineer with Java for our Chennai location.

Location: Chennai

Experience: 11 to 15 years

Job description

Required skills:

1. Candidate should have a minimum of 7 years of total experience
2. Candidate should have a minimum of 4 years of experience in Big Data design and development
3. Candidate should have experience in Java, Spark, Hive & Hadoop, and Python
4. Candidate should have experience in any RDBMS

Roles & Responsibilities:

1. Create work plans, and monitor and track the work schedule for on-time delivery as per the defined quality standards
2. Develop and guide team members in enhancing their technical capabilities and increasing productivity
3. Ensure process improvement and compliance in the assigned module, and participate in technical discussions and reviews
4. Prepare and submit status reports to minimize exposure and risks on the project, and close escalations


Regards,

Priyanka S

7P8R9I9Y4A0N8K8A7S7

Games 24x7
Agency job
via zyoin by Shubha N
Bengaluru (Bangalore)
0 - 6 yrs
₹10L - ₹21L / yr
PowerBI
Big Data
Hadoop
Apache Hive
Business Intelligence (BI)
Location: Bangalore
Work Timing: 5 Days A Week

Responsibilities include:

• Ensure the right stakeholders get the right information at the right time
• Gather requirements with stakeholders to understand their data needs
• Create and deploy reports
• Participate actively in datamart design discussions
• Work on both RDBMS and Big Data for designing BI solutions
• Write code (queries/procedures) in SQL / Hive / Drill that is both functional and elegant, following appropriate design patterns
• Design and plan BI solutions to automate regular reporting
• Debug, monitor, and troubleshoot BI solutions
• Create and deploy datamarts
• Write relational and multidimensional database queries
• Integrate heterogeneous data sources into BI solutions
• Ensure the integrity of data flowing from heterogeneous data sources into BI solutions

Minimum Job Qualifications:
• BE/B.Tech in Computer Science/IT from top colleges
• 1-5 years of experience in data warehousing and SQL
• Excellent analytical skills
• Excellent technical as well as communication skills
• Attention to even the smallest detail is mandatory
• Knowledge of SQL query writing and performance tuning
• Knowledge of Big Data technologies like Apache Hadoop, Apache Hive, and Apache Drill
• Knowledge of the fundamentals of Business Intelligence
• In-depth knowledge of RDBMS systems, data warehousing, and datamarts
• Smart, motivated, and team-oriented

Desirable Requirements:
• Sound knowledge of software development in programming (preferably Java)
• Knowledge of the software development lifecycle (SDLC) and its models
BDI Plus Lab
Posted by Silita S
Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹12L / yr
Big Data
Hadoop
Java
Python
PySpark

Roles and responsibilities:

1. Responsible for the development and maintenance of applications built with Enterprise Java and distributed technologies
2. Experience in Hadoop, Kafka, Spark, Elasticsearch, SQL, Kibana, and Python; experience with machine learning and analytics
3. Collaborate with developers, product managers, business analysts, and business users in conceptualizing, estimating, and developing new software applications and enhancements
4. Collaborate with the QA team to define test cases and metrics, and resolve questions about test results
5. Assist in the design and implementation process for new products; research and create POCs for possible solutions
6. Develop components based on business and/or application requirements
7. Create unit tests in accordance with team policies and procedures
8. Advise and mentor team members in specialized technical areas, and fulfill administrative duties as defined by the support process
9. Work with cross-functional teams during crises to address and resolve complex incidents and problems, in addition to the assessment, analysis, and resolution of cross-functional issues
Centime
Agency job
via FlexAbility by Srikanth Voona
Hyderabad
8 - 14 yrs
₹15L - ₹35L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Deep Learning
Java
Python

Required skills

• Around 6-8.5 years of experience overall, with around 4+ years in the AI / machine learning space
• Extensive experience in designing large-scale machine learning solutions for ML use cases and large-scale deployments, and in establishing continuous automated improvement/retraining frameworks
• Strong experience in Python and Java is required
• Hands-on experience with Scikit-learn, Pandas, and NLTK
• Experience in handling time-series data and associated techniques like Prophet and LSTM
• Experience in regression, clustering, and classification algorithms
• Extensive experience in building traditional machine learning models (SVM, XGBoost, decision trees) and deep neural network models (RNN, feedforward) is required
• Experience in AutoML tools like TPOT or others
• Must have strong hands-on experience in deep learning frameworks like Keras, TensorFlow, or PyTorch
• Knowledge of Capsule Networks, reinforcement learning, or SageMaker is a desirable skill
• Understanding of the financial domain is a desirable skill
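As a concrete (if minimal) instance of the regression item in the skills list above, here is ordinary least squares with a single feature implemented from scratch. In practice the role would use Scikit-learn; the toy data below is invented for illustration:

```python
# Simple linear regression (ordinary least squares, one feature), from scratch.
# slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x).

def fit_ols(xs, ys):
    """Return (slope, intercept) minimizing squared error for y ~ slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Perfectly linear toy data: y = 2x + 1, so OLS should recover (2.0, 1.0).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
slope, intercept = fit_ols(xs, ys)
print(slope, intercept)  # 2.0 1.0
```

The closed form above is the one-feature special case of the normal equations; clustering and classification have analogous minimal forms (k-means, logistic regression) built on the same fit-then-predict shape.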

Responsibilities

• Design and implement solutions for ML use cases
• Productionize systems and maintain them
• Lead and implement the data acquisition process for ML work
• Learn new methods and models quickly and apply them to use cases
Japan Based Leading Company
Bengaluru (Bangalore)
3 - 10 yrs
₹0L - ₹20L / yr
Big Data
Amazon Web Services (AWS)
Java
Python
MySQL
We are looking for a data engineer with AWS Cloud infrastructure experience to join our Big Data Operations team. This role will provide advanced operations support, contribute to automation and system improvements, and work directly with enterprise customers to provide excellent customer service.
The candidate:
1. Must have very good hands-on technical experience of 3+ years with Java or Python
2. Working experience and a good understanding of AWS Cloud; advanced experience with IAM policy and role management
3. Infrastructure operations: 5+ years supporting systems infrastructure operations, upgrades, deployments using Terraform, and monitoring
4. Hadoop: experience with Hadoop (Hive, Spark, Sqoop) and/or AWS EMR
5. Knowledge of PostgreSQL/MySQL/DynamoDB backend operations
6. DevOps: experience with DevOps automation - orchestration/configuration management and CI/CD tools (Jenkins)
7. Version control: working experience with one or more version control platforms like GitHub or GitLab
8. Knowledge of AWS QuickSight reporting
9. Monitoring: hands-on experience with monitoring tools such as AWS CloudWatch, AWS CloudTrail, Datadog, and Elasticsearch
10. Networking: working knowledge of TCP/IP networking, SMTP, HTTP, load balancers (ELB), and high-availability architecture
11. Security: experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment; familiarity with penetration testing and scan tools for remediation of security vulnerabilities
12. Demonstrated success in learning new technologies quickly
What will be the roles and responsibilities?
1. Create procedures/run books for operational and security aspects of AWS platform
2. Improve AWS infrastructure by developing and enhancing automation methods
3. Provide advanced business and engineering support services to end users
4. Lead other admins and platform engineers through design and implementation decisions to achieve balance between strategic design and tactical needs
5. Research and deploy new tools and frameworks to build a sustainable big data platform
6. Assist with creating programs for training and onboarding for new end users
7. Lead Agile/Kanban workflows and team process work
8. Troubleshoot issues to resolve problems
9. Provide status updates to Operations product owner and stakeholders
10. Track all details in the issue tracking system (JIRA)
11. Provide issue review and triage problems for new service/support requests
12. Use DevOps automation tools, including Jenkins build jobs
13. Fulfil ad-hoc data or report requests from different functional groups
Sameeksha Capital
Ahmedabad
1 - 2 yrs
₹1L - ₹3L / yr
Java
Python
Data Structures
Algorithms
C++
Looking for an Alternative Data Programmer for an equity fund.

The programmer should be proficient in Python and able to work fully independently. They should also be skilled at working with databases, and have a strong ability to fetch data from various sources, organize it, and identify useful information through efficient code.

Some examples of work:

• Text search on earnings transcripts for keywords to identify future trends
• Integration of internal and external financial databases
• Web scraping to capture, clean, and organize data
• Automatic updating of our financial models by importing data from machine-readable formats such as XBRL
• Fetching data from public databases such as RBI, NSE, BSE, and DGCA, and processing it
• Back-testing of data, either to test historical cause-and-effect relationships on market or portfolio performance, or to back-test our screener criteria when devising strategy
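The first example of work above, keyword search over earnings transcripts, can be sketched in a few lines of Python; the keyword list and transcript text here are invented for illustration:

```python
import re

# Count occurrences of trend keywords per transcript (case-insensitive,
# whole words only). The keywords and the sample transcript are made up.
KEYWORDS = ["capex", "guidance", "margin", "demand"]

def keyword_counts(text, keywords=KEYWORDS):
    """Return a dict of case-insensitive whole-word counts for each keyword."""
    counts = {}
    lowered = text.lower()
    for kw in keywords:
        counts[kw] = len(re.findall(r"\b" + re.escape(kw) + r"\b", lowered))
    return counts

transcript = (
    "Management raised full-year guidance on strong demand. "
    "Capex will rise next quarter, but margin pressure from capex persists."
)
print(keyword_counts(transcript))
# {'capex': 2, 'guidance': 1, 'margin': 1, 'demand': 1}
```

Tracking these counts across quarters, rather than within one transcript, is what turns the search into a trend signal.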
LimeTray
Posted by Tanika Monga
NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹15L - ₹18L / yr
Machine Learning (ML)
Python
Cassandra
MySQL
Apache Kafka
Requirements:
• Minimum 4 years' work experience in building, managing, and maintaining analytics applications
• B.Tech/BE in CS/IT from Tier 1/2 institutes
• Strong fundamentals of data structures and algorithms
• Good analytical and problem-solving skills
• Strong hands-on experience in Python
• In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ)
• Experience in building data pipelines and real-time analytics systems
• Experience in SQL (MySQL) and NoSQL (Mongo/Cassandra) databases is a plus
• Understanding of service-oriented architecture
• Delivered high-quality work with significant contributions
• Expert in git, unit tests, technical documentation, and other development best practices
• Experience in handling small teams
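The queueing-systems requirement (Kafka/ActiveMQ/RabbitMQ) comes down to the producer/consumer pattern those brokers implement. A broker-free sketch using Python's stdlib queue.Queue, with the event shape invented for illustration:

```python
import queue
import threading

# One producer pushes events onto a bounded queue; one consumer drains it.
# Kafka/RabbitMQ add durability and distribution, but the flow is the same.
q = queue.Queue(maxsize=100)
SENTINEL = None  # end-of-stream marker
results = []

def producer(n):
    for i in range(n):
        q.put({"event_id": i, "kind": "page_view"})  # blocks if queue is full
    q.put(SENTINEL)  # tell the consumer to stop

def consumer():
    while True:
        msg = q.get()
        if msg is SENTINEL:
            break
        results.append(msg["event_id"])  # stand-in for real processing

p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer)
c.start()
p.start()
p.join()
c.join()
print(len(results))  # 10
```

The bounded `maxsize` is the in-process analogue of broker back-pressure: a slow consumer eventually blocks the producer instead of letting memory grow without limit.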
Bengaluru (Bangalore)
3 - 12 yrs
₹3L - ₹25L / yr
Java
Python
Spark
Hadoop
MongoDB
We are a start-up in India seeking excellence in everything we do, with unwavering curiosity and enthusiasm. We build a simplified, new-age, AI-driven Big Data analytics platform for global enterprises and solve their biggest business challenges. Our engineers develop fresh, intuitive solutions, keeping the user at the center of everything. As a Cloud-ML Engineer, you will design and implement ML solutions for customer use cases and solve complex technical customer challenges.

Expectations and Tasks:
- Total of 7+ years of experience, with a minimum of 2 years in Hadoop technologies like HDFS, Hive, and MapReduce
- Experience working with recommendation engines, data pipelines, or distributed machine learning, and experience with data analytics and data visualization techniques and software
- Experience with core data science techniques such as regression, classification, or clustering, and experience with deep learning frameworks
- Experience in NLP, R, and Python
- Experience in performance tuning and optimization techniques to process big data from heterogeneous sources
- Ability to communicate clearly and concisely across technology and business teams
- Excellent problem-solving and technical troubleshooting skills
- Ability to handle multiple projects and prioritize tasks in a rapidly changing environment

Technical Skills: Core Java, Multithreading, Collections, OOPS, Python, R, Apache Spark, MapReduce, Hive, HDFS, Hadoop, MongoDB, Scala

We are a retained search firm employed by our client, a technology start-up in Bangalore. Interested candidates can share their resumes with me at [email protected]; I will respond within 24 hours. Online assessments and pre-employment screening are part of the selection process.
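Among the technical skills listed above, MapReduce can be illustrated locally: map each document to partial word counts, then reduce the partials into a global count. A minimal sketch (the documents are invented; Hadoop or Spark would distribute both phases across machines):

```python
from collections import Counter
from functools import reduce

# Word count, the canonical MapReduce example, run in a single process.
docs = [
    "big data analytics platform",
    "ai driven big data platform",
]

def map_phase(doc):
    return Counter(doc.split())  # per-document partial counts

def reduce_phase(a, b):
    return a + b                 # merge two partial counts

total = reduce(reduce_phase, map(map_phase, docs), Counter())
print(total["platform"], total["big"])  # 2 2
```

Because `reduce_phase` is associative and commutative, the partial counts can be merged in any order and in parallel, which is exactly what makes the pattern distributable.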