
Big Data Administrator
Posted by Arjun Ravindran


Locations

Bengaluru (Bangalore)

Experience

3 - 8 years

Salary

14 - 22 lpa

Skills

Big Data
Java
System Programming
DevOps
Amazon Web Services (AWS)
NoSQL Databases
JavaScript

Job description

As a Big Data Administrator, you’ll be responsible for the administration and governance of a complex analytics platform that is already changing the way large industrial companies manage their assets. A Big Data Administrator understands cutting-edge tools and frameworks and can determine the best tool for any given task. You will enable and work with our other developers to use cutting-edge technologies in fields such as distributed systems, data ingestion and mapping, and machine learning. We also strongly encourage everyone to tinker with existing tools, stay up to date, and test new technologies, all with the aim of ensuring that our existing systems don’t stagnate or deteriorate.

Responsibilities: as a Big Data Administrator, your responsibilities may include, but are not limited to, the following:
● Build a scalable Big Data platform designed to serve many different use cases and requirements
● Build a highly scalable framework for ingesting, transforming and enhancing data at web scale (see the PySpark sketch below)
● Develop data structures and processes using components of the Hadoop ecosystem such as Avro, Hive, Parquet, Impala, HBase, Kudu and Tez
● Establish automated build and deployment pipelines
● Implement machine learning models that enable customers to glean hidden insights about their data
● Implement security and integrate with components such as LDAP, AD, Sentry and Kerberos, applying row-level and role-based security concepts such as inheritance
● Establish scalability benchmarks for predictable scaling thresholds

Qualifications:
● Bachelor's degree in Computer Science or a related field
● 6+ years of system-building experience
● 4+ years of programming experience using JVM-based languages
● A passion for DevOps and an appreciation for continuous integration/deployment
● A passion for QA and an understanding that testing is not someone else’s responsibility
● Experience automating infrastructure and build processes
● Outstanding programming and problem-solving skills
● A strong passion for technology and building great systems
● Excellent communication skills and the ability to work using Agile methodologies
● Ability to work quickly and collaboratively in a fast-paced, entrepreneurial environment
● Experience with service-oriented (SOA) and event-driven (EDA) architectures
● Experience using big data solutions in an AWS environment
● Experience with NoSQL data stores: Cassandra, HDFS and/or Elasticsearch
● Experience with JavaScript or associated frameworks

Preferred skills: we value these qualities, but they’re not required for this role:
● Master's or Ph.D. in a related field
● Experience as an open source contributor
● Experience with Akka, stream-processing technologies and concurrency frameworks
● Experience with data modeling
● Experience with Chef, Puppet, Ansible, Salt or equivalent
● Experience with Docker, Mesos and Marathon
● Experience with distributed messaging services, preferably Kafka
● Experience with distributed data processors, preferably Spark
● Experience with Angular, React, Redux, Immutable.js, Rx.js, Node.js or equivalent
● Experience with Reactive and/or Functional programming
● Understanding of Thrift, Avro or protocol buffers
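To make the ingest-transform-persist loop described above concrete, here is a minimal PySpark sketch that lands raw Avro events as partitioned Parquet registered in Hive. This is an illustration only, not Microland's actual codebase: the paths, schema and table name are hypothetical, and reading Avro assumes the spark-avro package is available.

```python
# Hypothetical ingest job: Avro in, partitioned Parquet + Hive table out.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("asset-events-ingest")   # hypothetical job name
    .enableHiveSupport()              # lets us register the output in Hive
    .getOrCreate()
)

# Ingest raw Avro events landed by an upstream collector (hypothetical path).
raw = spark.read.format("avro").load("hdfs:///data/raw/asset_events/")

# Enhance: drop obviously bad rows and derive a partition column.
clean = (
    raw.filter(F.col("asset_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Persist as partitioned Parquet, queryable from Hive/Impala.
(clean.write
      .mode("append")
      .partitionBy("event_date")
      .format("parquet")
      .saveAsTable("analytics.asset_events"))
```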

About Microland Limited

Microland delivers digital business technology solutions and services that accelerate business outcomes, enhance workplace productivity, and deliver competitive advantage through increased efficiency. Visit us to accelerate your digital transformation journey.

Founded

1989

Type

Services

Size

250+ employees

Stage

Profitable

Similar jobs

Data Engineer

Founded 2010
Products and services
Location: Pune
Experience: 1 - 5 years
Salary: 5 - 15 lacs/annum

Job Description
In this role you will help us build, improve and maintain our huge data infrastructure, where we collect TBs of logs daily. Data-driven decisioning is crucial to the success of our customers, and this role is central to ensuring we have a cutting-edge data infrastructure to do things faster, better and cheaper!
Experience: 1 - 3 years
Required Skills
- Must be a polyglot with good command over Java, Scala and a scripting language
- Non-trivial project experience in distributed computing frameworks like Apache Spark/Hadoop/Pig/Kafka/Storm, with sound knowledge of their internals
- Expert knowledge of relational databases like MySQL, and in-memory data stores like Redis
- Regular participation in coding/hacking contests like TopCoder, Code Jam and Hacker Cup is a huge plus
Prerequisites
- Strong analytical skills and a solid foundation in Computer Science fundamentals, especially in Data Structures/Algorithms, Object-Oriented principles, Operating Systems and Computer Networks
- Ability and willingness to take ownership and work under minimum supervision, independently or as part of a team
- Passion for innovation and a "Never Say Die" attitude
- Strong verbal and written communication skills
Education: B.Tech/M.Tech/MS/Dual degree in Computer Science with above-average academic credentials
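As one flavour of the log-scale work this description hints at, a daily PySpark rollup over JSON logs might look like the sketch below. The path, schema and metrics are invented for illustration and are not from the posting.

```python
# Hypothetical daily rollup: request counts and error rates per service.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-log-rollup").getOrCreate()

# Hypothetical landing path for one day of JSON logs.
logs = spark.read.json("hdfs:///logs/2019-03-22/")

rollup = (
    logs.groupBy("service")
        .agg(
            F.count("*").alias("requests"),
            # Fraction of responses with a 5xx status code.
            F.avg((F.col("status") >= 500).cast("int")).alias("error_rate"),
        )
)

rollup.write.mode("overwrite").parquet("hdfs:///warehouse/log_rollups/2019-03-22/")
```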

Job posted by
Sachin Bhatevara

Intern - Big Data Engineering

Founded 2011
Products and services
Location: Bangalore
Experience: 0 - 1 years
Salary: 2 - 4 lacs/annum

About Us
DataWeave is a data platform which aggregates publicly available data from disparate sources and makes it available in the right format to enable companies to take strategic decisions using trans-firewall analytics. It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web, at serious scale! Read more on Become a DataWeaver.
Skills and Requirements:
● Good communication and collaboration skills.
● Ability to code and script, with a strong grasp of CS fundamentals and excellent problem-solving abilities.
● Comfortable with at least one coding language; Python would be a plus.
● Good understanding of RDBMS.
● Experience in building data pipelines and processing large datasets is a plus.
● Knowledge of building crawlers is a plus (a bare-bones sketch follows below).
● Working knowledge of open source tools such as MySQL, Solr, Elasticsearch and Cassandra would be a plus.
Growth at DataWeave
● Based on performance, permanent employment will be offered 3-6 months into the internship.
● You have the opportunity to work in many different areas and explore a wide variety of tools to figure out what really excites you.
● Competitive salary packages.
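Since the role calls out building crawlers over public web data, a bare-bones polite fetch loop in Python is sketched below. The seed URL and user agent are made up; a real crawler would add robots.txt handling, retries, deduplication and a URL frontier.

```python
# Hypothetical crawler skeleton: fetch seed pages with a rate limit.
import time
import requests

SEED_URLS = ["https://example.com/products?page=1"]  # hypothetical seed

def fetch(url):
    """Fetch a page, returning its HTML or None on failure."""
    try:
        resp = requests.get(
            url, timeout=10, headers={"User-Agent": "demo-crawler"}
        )
        resp.raise_for_status()
        return resp.text
    except requests.RequestException:
        return None

for url in SEED_URLS:
    html = fetch(url)
    if html:
        print(url, len(html), "bytes")   # real code would parse and store here
    time.sleep(1.0)                      # be polite between requests
```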

Job posted by
Sadananda Vaidya

Sr. Big Data Engineer

Founded 2011
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 8 - 18 lacs/annum

Roles and Responsibilities:
● Inclined towards working in a start-up-like environment.
● Comfort with frequent, incremental code testing and deployment, and data management skills.
● Design and build robust and scalable data engineering solutions for structured and unstructured data, delivering business insights, reporting and analytics.
● Expertise in troubleshooting, debugging, data completeness and quality issues, and scaling overall system performance.
● Build robust APIs that power our delivery points (dashboards, visualizations and other integrations); a minimal sketch follows below.
Skills and Requirements:
● Good communication and collaboration skills, with 3-7 years of experience.
● Ability to code and script, with a strong grasp of CS fundamentals and excellent problem-solving abilities.
● Good understanding of RDBMS.
● Experience in building data pipelines and processing large datasets.
● Knowledge of building crawlers and data mining is a plus.
● Working knowledge of open source tools such as MySQL, Solr, Elasticsearch and Cassandra (data stores) would be a plus.
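One responsibility above is building APIs that power dashboards; a minimal read-only endpoint, sketched here with Flask, shows the shape of such a delivery point. Flask, the route and the data are assumptions for illustration, not the company's actual stack.

```python
# Hypothetical delivery-point API over precomputed insights.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for results that would come from a store such as MySQL or Cassandra.
PRICE_INSIGHTS = {"sku-123": {"min_price": 799, "sellers": 14}}

@app.route("/insights/<sku>")
def get_insight(sku):
    insight = PRICE_INSIGHTS.get(sku)
    if insight is None:
        abort(404)            # unknown SKU
    return jsonify(insight)

if __name__ == "__main__":
    app.run(port=8080)
```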

Job posted by
Sadananda Vaidya

Data Science Engineer (SDE I)

Founded 2017
Products and services
Location: Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: 12 - 20 lacs/annum

Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers, to empower real-time experiences for their combined >200 million end users. For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with big data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL databases, big data analytics, and handling Unix and production servers. A Tier-1 college (BE from IITs, BITS Pilani, top NITs or IIITs, or an MS from Stanford, Berkeley, CMU or UW–Madison) or an exceptionally bright work history is a must. Let us know if this interests you and you'd like to explore the profile further.

Job posted by
Shobhit Agarwal

Software Developer

via IQVIA
Founded 1969
Products and services
Location: Bengaluru (Bangalore), Kochi (Cochin)
Experience: 2 - 7 years
Salary: 5 - 25 lacs/annum

Job Skill Requirements:
• 4+ years of experience building and managing complex products/solutions
• 2+ years of experience in DW/ELT/ETL technologies (nice to have)
• 3+ years of hands-on development experience using Big Data technologies like Hadoop and Spark
• 3+ years of hands-on development experience using Big Data ecosystem components like Hive, Impala, HBase, Sqoop, Oozie etc.
• Proficient-level programming in Scala
• Good to have: hands-on experience building web services in a Python/Scala stack
• Good to have: experience developing RESTful web services
• Knowledge of web technologies and protocols (NoSQL/JSON/REST/JMS)

Job posted by
Ambili Sasidharan

Data Engineer

Founded 2015
Products and services
Location: Mumbai
Experience: 1 - 5 years
Salary: 7 - 12 lacs/annum

JOB DESCRIPTION:
We are looking for a Data Engineer with a solid background in scalable systems to work with our engineering team to improve and optimize our platform. You will have significant input into the team’s architectural approach and execution. We are looking for a hands-on programmer who enjoys designing and optimizing data pipelines for large-scale data. This is NOT a "data scientist" role, so please don't apply if you're looking for that.
RESPONSIBILITIES:
1. Build, maintain and test performant, scalable data pipelines
2. Work with data scientists and application developers to implement scalable pipelines for data ingest, processing, machine learning and visualization
3. Build interfaces for ingest across various data stores (see the sketch below)
MUST-HAVE:
1. A track record of building and deploying data pipelines as part of work or side projects
2. Ability to work with RDBMS, MySQL or Postgres
3. Ability to deploy over cloud infrastructure, at least AWS
4. Demonstrated ability and hunger to learn
GOOD-TO-HAVE:
1. Computer Science degree
2. Expertise in at least one of Python, Java or Scala
3. Expertise and experience in deploying solutions based on Spark and Kafka
4. Knowledge of container systems like Docker or Kubernetes
5. Experience with NoSQL/graph databases
6. Knowledge of machine learning
Kindly apply only if you are skilled in building data pipelines.
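To illustrate the "interfaces for ingest across various data stores" responsibility, here is a small batched insert into Postgres. The DSN, table and rows are hypothetical, and psycopg2's execute_values is just one common way to batch writes; it is not prescribed by the posting.

```python
# Hypothetical batched ingest into a Postgres events table.
import psycopg2
from psycopg2.extras import execute_values

rows = [
    ("2019-03-22T10:00:00Z", "click", "user-1"),
    ("2019-03-22T10:00:01Z", "view", "user-2"),
]

conn = psycopg2.connect("dbname=events user=etl")  # hypothetical DSN
with conn, conn.cursor() as cur:
    # execute_values expands VALUES %s into one multi-row INSERT.
    execute_values(
        cur,
        "INSERT INTO raw_events (event_ts, event_type, user_id) VALUES %s",
        rows,
    )
conn.close()
```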

Job posted by
Zeimona Dsouza

Data ETL Engineer

Founded 2013
Products and services
Location: Chennai
Experience: 1 - 3 years
Salary: 5 - 12 lacs/annum

Responsibilities:
● Design and develop an ETL framework and data pipelines in Python 3 (a toy sketch follows below).
● Orchestrate complex data flows from various data sources (such as RDBMS and REST APIs) to the data warehouse and vice versa.
● Develop app modules (in Django) for enhanced ETL monitoring.
● Devise technical strategies for making data seamlessly available to the BI and Data Science teams.
● Collaborate with engineering, marketing, sales and finance teams across the organization and help Chargebee develop complete data solutions.
● Serve as a subject-matter expert for available data elements and analytic capabilities.
Qualification:
● Expert programming skills, with the ability to write clean and well-designed code.
● Expertise in Python, with knowledge of at least one Python web framework.
● Strong SQL knowledge and high proficiency in writing advanced SQL.
● Hands-on experience in modeling relational databases.
● Experience integrating with third-party platforms is an added advantage.
● Genuine curiosity, proven problem-solving ability, and a passion for programming and data.
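A toy version of the REST-to-warehouse flow described above, in Python 3, is sketched below. The endpoint and schema are invented, and sqlite3 stands in for the real warehouse.

```python
# Hypothetical extract-transform-load pass: REST API -> warehouse table.
import sqlite3
import requests

# Extract: pull raw records from a hypothetical REST endpoint.
records = requests.get("https://api.example.com/invoices", timeout=10).json()

# Transform: keep only the fields the warehouse needs.
rows = [(r["id"], r["customer_id"], r["amount"]) for r in records]

# Load: idempotent upsert into the target table.
conn = sqlite3.connect("warehouse.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS invoices "
    "(id TEXT PRIMARY KEY, customer_id TEXT, amount REAL)"
)
conn.executemany("INSERT OR REPLACE INTO invoices VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()
```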

Job posted by
Vinothini Sundaram

Big Data Developer

Founded 2011
Products and services
Location: Chennai
Experience: 1 - 5 years
Salary: 1 - 6 lacs/annum

• Looking for a Big Data Engineer with 3+ years of experience.
• Hands-on experience with MapReduce-based platforms like Pig, Spark and Shark.
• Hands-on experience with data pipeline tools like Kafka, Storm and Spark Streaming.
• Store and query data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix and Presto.
• Hands-on experience in managing Big Data on a cluster with HDFS and MapReduce.
• Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink and Storm (a minimal sketch follows below).
• Experience with Azure cloud, Cognitive Services and Databricks is preferred.
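For the real-time items above, a minimal PySpark Structured Streaming read from Kafka could look like the sketch below. The broker, topic and paths are hypothetical, and the job assumes the Spark-Kafka connector is on the classpath.

```python
# Hypothetical streaming job: Kafka topic -> Parquet files.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-demo").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
         .option("subscribe", "clickstream")                # hypothetical topic
         .load()
)

# Kafka values arrive as bytes; decode them before downstream processing.
events = stream.select(F.col("value").cast("string").alias("json"))

query = (
    events.writeStream
          .format("parquet")
          .option("path", "hdfs:///streams/clickstream/")
          .option("checkpointLocation", "hdfs:///checkpoints/clickstream/")
          .start()
)
query.awaitTermination()
```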

Job posted by
John Richardson

Big Data Engineer

Founded 2012
Products and services
Location: Mumbai
Experience: 2 - 7 years
Salary: 4 - 20 lacs/annum

As a Big Data Engineer, you will build utilities that help orchestrate the migration of massive Hadoop/Big Data systems onto public cloud systems. You will build data processing scripts and pipelines that serve many jobs and queries per day. The services you build will integrate directly with cloud services, opening the door to new and cutting-edge reusable solutions. You will work with engineering teams, co-workers and customers to gain new insights and dream of new possibilities.
The Big Data Engineering team is hiring in the following areas:
• Distributed storage and compute solutions
• Data ingestion, consolidation and warehousing
• Cloud migrations and replication pipelines
• Hybrid on-premise and in-cloud Big Data solutions
• Big Data, Hadoop and Spark processing
Basic Requirements:
• 2+ years of hands-on experience in data structures, distributed systems, Hadoop and Spark, and SQL and NoSQL databases
• Strong software development skills in at least one of Java, C/C++, Python or Scala
• Experience building and deploying cloud-based solutions at scale
• Experience in developing Big Data solutions (migration, storage, processing)
• BS, MS or PhD degree in Computer Science or Engineering, and 5+ years of relevant work experience in Big Data and cloud systems
• Experience building and supporting large-scale systems in a production environment
Technology Stack:
• Cloud platforms: AWS, GCP or Azure
• Big Data distributions: any of Apache Hadoop, CDH, HDP, EMR, Google Dataproc or HDInsight
• Distributed processing frameworks: one or more of MapReduce, Apache Spark, Apache Storm or Apache Flink
• Database/warehouse: Hive, HBase, and at least one cloud-native service
• Orchestration frameworks: any of Airflow, Oozie, Apache NiFi or Google Dataflow (a skeletal Airflow DAG follows below)
• Message/event solutions: any of Kafka, Kinesis or Cloud Pub/Sub
• Container orchestration (good to have): Kubernetes or Swarm
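Airflow is one of the orchestration frameworks named above; a skeletal DAG chaining a distcp copy with a validation step gives a feel for how such a migration utility might be scheduled. The commands, schedule and the validate_counts.py helper are hypothetical, and the import path shown is Airflow 2.x.

```python
# Hypothetical migration DAG: copy a daily partition, then validate it.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator  # Airflow 2.x import path

with DAG(
    dag_id="hdfs_to_cloud_migration",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    copy = BashOperator(
        task_id="copy_partition",
        # {{ ds }} is Airflow's templated execution date.
        bash_command="hadoop distcp /data/events/{{ ds }} gs://bucket/events/{{ ds }}",
    )
    validate = BashOperator(
        task_id="validate_counts",
        bash_command="python validate_counts.py --date {{ ds }}",  # hypothetical script
    )
    copy >> validate  # run validation only after the copy succeeds
```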

Job posted by
Anwar Shaikh

Senior Software Engineer

Founded 2013
Products and services
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 4 - 6 years
Salary: 15 - 18 lacs/annum

Requirements:
● Minimum 4 years' work experience in building, managing and maintaining analytics applications
● B.Tech/BE in CS/IT from Tier 1/2 institutes
● Strong fundamentals in data structures and algorithms
● Good analytical and problem-solving skills
● Strong hands-on experience in Python
● In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ); see the consumer sketch below
● Experience in building data pipelines and real-time analytics systems
● Experience in SQL (MySQL) and NoSQL (Mongo/Cassandra) databases is a plus
● Understanding of service-oriented architecture
● A record of delivering high-quality work with significant contributions
● Expert in git, unit tests, technical documentation and other development best practices
● Experience handling small teams
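For the queueing-systems requirement, a bare-bones consumer using the kafka-python client is sketched below. The topic, group id and handling logic are hypothetical.

```python
# Hypothetical Kafka consumer: read JSON events and hand them off.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "analytics-events",                  # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="analytics-workers",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # A real pipeline would batch, aggregate, or forward to a store here.
    print(message.topic, message.partition, message.offset, event.get("type"))
```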

Job posted by
tanika monga
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No 3rd party recruiters. No spam.