

Hadoop Jobs in Bangalore (Bengaluru)

Explore top Hadoop job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are posted by verified employees, who can be contacted directly.

Senior Software Engineer

Founded 2011
Bengaluru (Bangalore)
7 - 14 years
Best in industry: 30 - 80 lacs/annum

We are looking for a Senior Software Engineer with 7+ years' experience programming in Java or a similar language. If you are passionate about writing code that will run on millions of devices, creating rock-solid technology that will save lives, and you will not give up until the code is best in class, then Zendrive is the place for you.

Responsibilities:
- Understanding and analysing project requirements and translating them into specifications and programming deliverables.
- Working closely with analysts, designers, and clients to enhance existing applications as well as build new ones.
- Building scalable services, big data pipelines, and core infrastructure for the Zendrive data platform.
- Testing and debugging the product in controlled, real situations.
- Maintaining the systems and updating them as per requirements.

Competencies and Skills Required:
- Expertise in at least one programming language and tech stack: Java, C++, Python, JavaScript, etc.
- Excellent knowledge of applying concepts to coding.
- Strong experience building complex, scalable solutions, and experience leading large-scale big data and analytics projects.
- Strong experience working with cloud-based infrastructure such as AWS services.
- Strong understanding of database and analytical technologies, including MPP and NoSQL databases, data warehouse design, BI reporting, and dashboard development.
- Experience working with Hadoop, Spark, Cassandra, HBase, Elasticsearch, and relational DBs.
- Ability to deep-dive into problem-solving and build elegant, maintainable solutions to complex problems.
- Excellent communication and interpersonal skills.
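The "big data pipelines" called out above are usually built on the MapReduce model that Hadoop popularized: a map step emits key-value pairs, a shuffle groups them by key, and a reduce step aggregates each group. A minimal pure-Python sketch of that model (illustrative only, not Zendrive's actual code):

```python
from collections import defaultdict

def map_phase(records):
    # Emit (word, 1) pairs, as a Hadoop mapper would.
    for line in records:
        for word in line.lower().split():
            yield word, 1

def shuffle(pairs):
    # Group values by key, as the Hadoop shuffle/sort step does.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts for each word, as a reducer would.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data pipelines", "big data at scale"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"])  # 2
```

In a real Hadoop or Spark job, the three phases run distributed across a cluster, but the programming model is the same.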

Job posted by Twinkle Bhatnagar

Data Engineer

via Xpheno
Founded 2017
Bengaluru (Bangalore)
3 - 7 years
Best in industry: 6 - 13 lacs/annum

We are looking for a thriving data engineer with hands-on expertise in big data (Hadoop), the basics of Spark, Python/Scala, and SQL, who has handled a sizeable amount of data and has good communication and soft skills.

Job posted by Varsha B

Senior Data Engineer

Founded 2018
Bengaluru (Bangalore)
3 - 7 years
Best in industry: 8 - 18 lacs/annum

We are hiring a Senior Data Engineer for Bengaluru.

Responsibilities and Duties:
- 5-8 years of experience building complex software programs and applications for the acquisition, processing, and management of massive quantities of data (big data) using high-level programming languages (e.g., Java, C++, Python).
- Expertise in Eloqua Marketing Cloud: campaign management, emails, landing pages, programs, CDOs, and end-to-end integration of external data into Eloqua.
- Lead end-to-end efforts to design, develop, and implement data movement and integration processes in preparation for analysis, data warehousing, or operational data stores.
- Collect, process, and interpret large data sets, and identify and extract features of interest using methods such as aggregation and filtering.
- Develop and implement algorithms for data processing and manipulation tasks (e.g., cleaning, parsing, sorting, ranking).
- Exercise judgment in selecting methods and techniques to design, develop, and implement software tools and processes to extract, transfer, and load raw or pre-processed data into relational and NoSQL databases or data warehouses.
- Troubleshoot, optimize, and tune the performance of ETL processes and analytics queries.
- Assist with data model documentation, data dictionaries, data flows, and data mapping for end users.
- Create new metrics and develop tools for monitoring and reporting.
- Participate in complete end-to-end data engineering project work, including design, reviews, development, unit testing, and deployment.
- Expertise in SQL: create and develop complex queries in Hive using SQL.
- Extensive working experience with Hive and Vertica.
- Intermediate knowledge of AWS.

Qualifications and Skills:
- Advanced Python skills
- Data engineering, ETL, and ELT skills
- Expertise in streaming data
- Experience with the Hadoop ecosystem
- AWS skills (S3, Athena, Lambda) or any cloud platform

Interested to apply? Please revert with an updated resume and the details below: total experience in IT, relevant experience, current CTC, expected CTC, notice period, and current location.
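The extract-transform-load work at the heart of a role like this can be sketched in miniature with Python's built-in sqlite3 module: pull raw records in, clean and type them, load them into a queryable store. A toy illustration only (the table, columns, and data are hypothetical; a production pipeline would target Hive or Vertica, as the listing notes):

```python
import sqlite3

# Extract: raw rows, e.g. parsed from a CSV export (hypothetical data).
raw_rows = [
    ("alice", " 42 "),
    ("bob", "17"),
    ("carol", "not_a_number"),  # dirty record to be filtered out
]

# Transform: strip whitespace, cast types, drop unparseable records.
clean_rows = []
for name, score in raw_rows:
    try:
        clean_rows.append((name.strip(), int(score.strip())))
    except ValueError:
        pass  # real pipelines route bad rows to a quarantine table

# Load: write into a warehouse-style table, then query it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (name TEXT, score INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?)", clean_rows)

total = conn.execute("SELECT SUM(score) FROM scores").fetchone()[0]
print(total)  # 59
```

The same extract/transform/load split scales up: the transform step is where the "cleaning, parsing, sorting, ranking" work from the listing lives.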

Job posted by Deepali Choudhary

Data Engineer

Founded 1993
Anywhere
3 - 10 years
Best in industry: 10 - 32 lacs/annum

Data Engineering role at ThoughtWorks

ThoughtWorks India is looking for talented data engineers passionate about building large-scale data processing systems to help manage the ever-growing information needs of our clients. Our developers have been contributing code to major organizations and open source projects for over 25 years now. They've also been writing books, speaking at conferences, and helping push software development forward, changing companies and even industries along the way. As consultants, we work with our clients to ensure we're delivering the best possible solution. Our Lead Dev plays an important role in leading these projects to success.

You will be responsible for:
- Creating complex data processing pipelines as part of diverse, high-energy teams
- Designing scalable implementations of the models developed by our data scientists
- Hands-on programming based on TDD, usually in a pair programming environment
- Deploying data pipelines in production based on Continuous Delivery practices

Ideally, you should have:
- 2-6 years of overall industry experience
- A minimum of 2 years of experience building and deploying large-scale data processing pipelines in a production environment
- Strong domain modelling and coding experience in Java/Scala/Python
- Experience building data pipelines and data-centric applications in a production setting, using distributed storage platforms like HDFS, S3, and NoSQL databases (HBase, Cassandra, etc.) and distributed processing platforms like Hadoop, Spark, Hive, Oozie, Airflow, Kafka, etc.
- Hands-on experience with at least one of MapR, Cloudera, Hortonworks, and/or cloud offerings (AWS EMR, Azure HDInsight, Qubole, etc.)
- Knowledge of software best practices like Test-Driven Development (TDD), Continuous Integration (CI), and Agile development
- Strong communication skills, with the ability to work in a consulting environment

And here are some of the perks of being part of a unique organization like ThoughtWorks:
- A real commitment to "changing the face of IT": our way of thinking about diversity and inclusion. Over the past ten years, we've implemented many initiatives to make ThoughtWorks a place that reflects the world around us, and a welcoming home to technologists of all stripes. We're not perfect, but we're actively working towards true gender balance for our business and our industry, and you'll see that diversity reflected on our project teams and in our offices.
- Continuous learning. You'll be constantly exposed to new languages, frameworks, and ideas from your peers and as you work on different projects, challenging you to stay at the top of your game.
- Support to grow as a technologist outside of your role at ThoughtWorks. This is why ThoughtWorkers have written over 100 books and can be found speaking at (and, ahem, keynoting) tech conferences all over the world. We love to learn and share knowledge, and you'll find a community of passionate technologists eager to back your endeavors, whatever they may be. You'll also receive financial support to attend conferences every year.
- An organizational commitment to social responsibility. ThoughtWorkers challenge each other to be just a little more thoughtful about the world around us, and we believe in using our profits for good. All around the world, you'll find ThoughtWorks supporting great causes and organizations in both official and unofficial capacities.

If you relish the idea of being part of ThoughtWorks' Data Practice, which extends beyond the work we do for our customers, you may find ThoughtWorks is the right place for you. If you share our passion for technology and want to help change the world with software, we want to hear from you!
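The TDD practice the listing mentions means writing a failing test first and letting it drive the implementation. A minimal sketch with Python's built-in unittest module (the `dedupe_keep_latest` pipeline transform is a hypothetical example, not ThoughtWorks code):

```python
import unittest

def dedupe_keep_latest(events):
    # Hypothetical pipeline transform: keep the latest event per key.
    # Events are (key, timestamp, value) tuples.
    latest = {}
    for key, ts, value in events:
        if key not in latest or ts > latest[key][0]:
            latest[key] = (ts, value)
    return {key: value for key, (ts, value) in latest.items()}

class DedupeTest(unittest.TestCase):
    # In TDD this test is written first, fails, and then drives
    # the implementation above into existence.
    def test_keeps_latest_value_per_key(self):
        events = [("a", 1, "old"), ("b", 1, "x"), ("a", 2, "new")]
        self.assertEqual(dedupe_keep_latest(events), {"a": "new", "b": "x"})

if __name__ == "__main__":
    unittest.main(argv=["dedupe_test"], exit=False)
```

In a pairing session, one person typically writes the next failing test while the other makes it pass.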

Job posted by Suresh Teegireddy

Data Engineer

Founded 2017
Bengaluru (Bangalore)
2 - 5 years
Best in industry: 15 - 28 lacs/annum

Job Description:
We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.

Responsibilities:
- Working with big data tools and frameworks to provide requested capabilities
- Identifying development needs in order to improve and streamline operations
- Developing and managing BI solutions
- Implementing ETL processes and data warehousing
- Monitoring performance and managing infrastructure

Skills:
- Proficient understanding of distributed computing principles
- Proficiency with Hadoop and Spark
- Experience building stream-processing systems using solutions such as Kafka and Spark Streaming
- Good knowledge of data querying tools such as SQL and Hive
- Knowledge of various ETL techniques and frameworks
- Experience with Python/Java/Scala (at least one)
- Experience with cloud services such as AWS or GCP
- Experience with NoSQL databases such as DynamoDB and MongoDB is an advantage
- Excellent written and verbal communication skills
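Stream-processing stacks like Kafka with Spark Streaming, mentioned above, commonly aggregate events over time windows, expiring old state as new events arrive. A toy pure-Python sketch of a sliding-window counter (illustrative only; no broker or cluster involved):

```python
from collections import deque

class SlidingWindowCounter:
    """Count events seen within the last `window` seconds (toy example)."""

    def __init__(self, window):
        self.window = window
        self.events = deque()  # timestamps, oldest first

    def record(self, ts):
        self.events.append(ts)
        self._evict(ts)

    def count(self, now):
        self._evict(now)
        return len(self.events)

    def _evict(self, now):
        # Drop events that fell out of the window, the way a stream
        # processor expires old state.
        while self.events and self.events[0] <= now - self.window:
            self.events.popleft()

counter = SlidingWindowCounter(window=10)
for ts in [1, 2, 5, 11, 12]:
    counter.record(ts)
print(counter.count(now=12))  # events at 5, 11, 12 remain -> 3
```

Real engines add partitioning, fault tolerance, and out-of-order handling on top, but the windowed-state idea is the same.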

Job posted by Keerthana k

Data Engineer

Bengaluru (Bangalore)
3 - 5 years
Best in industry: 7 - 13 lacs/annum

Qualifications for Big Data Engineer:
We are looking for a candidate with 2+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science, Information Systems, or another quantitative field. They should also have experience with the following software/tools:
- Big data tools: Hadoop, Spark, Kafka, Hive, etc.
- Relational SQL and NoSQL databases
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift
- Stream-processing systems: Storm, Spark Streaming, etc.
- Object-oriented/object function scripting languages: Java, Scala, Python, etc.
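Workflow managers like Azkaban, Luigi, and Airflow, listed above, all boil down to running tasks in dependency order over a directed acyclic graph. The scheduling core can be sketched with Python's standard-library graphlib (the task names here are made up for illustration):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on, much like an
# Airflow DAG definition (task names are hypothetical).
dag = {
    "extract": set(),
    "clean": {"extract"},
    "aggregate": {"clean"},
    "load_warehouse": {"aggregate"},
    "report": {"load_warehouse"},
}

# A topological sort yields a run order in which every task's
# dependencies come before the task itself.
order = list(TopologicalSorter(dag).static_order())
print(order)  # dependencies always precede dependents
```

Real workflow managers add retries, scheduling, and parallel execution of independent tasks, but dependency ordering is the foundation they share.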

Job posted by Biswajit Sahu

ETL Talend Developer

Founded 2011
Bengaluru (Bangalore)
5 - 19 years
Best in industry: 10 - 30 lacs/annum

Strong exposure to ETL / Big Data / Talend / Hadoop / Spark / Hive / Pig is required.

To be considered for a Senior Data Engineer position, a candidate must have a proven track record of architecting data solutions on current and advanced technical platforms. They must have the leadership ability to lead a team providing data-centric solutions with best practices and modern technologies in mind. They look to build collaborative relationships across all levels of the business and the IT organization. They possess analytic and problem-solving skills and the ability to research and provide appropriate guidance for synthesizing complex information and extracting business value. They have the intellectual curiosity and ability to deliver solutions with creativity and quality, work effectively with the business and customers to obtain business value for the requested work, and are able to communicate technical results to both technical and non-technical users using effective storytelling techniques and visualizations. They have a demonstrated ability to produce high-quality, innovative work both independently and collaboratively.

Job posted by Shobha B K

Data Engineer

Founded 2004
Bengaluru (Bangalore)
2 - 5 years
Best in industry: 5 - 8 lacs/annum

Who we are:
Searce is a Cloud, Automation & Analytics led business transformation company focussed on helping futurify businesses. We help our clients become successful by helping them reimagine 'what's next' and then enabling them to realize that 'now'. We processify, saasify, innovify & futurify businesses by leveraging Cloud | Analytics | Automation | BPM.

What we believe:
- Best practices are overrated. Implementing best practices can only make one 'average'.
- Honesty and transparency. We believe in naked truth. We do what we tell and tell what we do.
- Client partnership. Client-vendor relationship: no. We partner with clients instead. And our sales team comprises 100% of our clients.

How we work:
It's all about being happier first, and the rest follows. Searce work culture is defined by HAPPIER:
- Humble: Happy people don't carry ego around. We listen to understand, not to respond.
- Adaptable: We are comfortable with uncertainty, and we accept changes well, as that's what life is about.
- Positive: We are super positive about work and life in general. We love to forget and forgive. We don't hold grudges; we don't have the time or space for them.
- Passionate: We are as passionate about the great vada-pao vendor across the street as about Tesla's new model. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: Innovate or die. We love to challenge the status quo.
- Experimental: We encourage curiosity and making mistakes.
- Responsible: Driven. Self-motivated. Self-governing teams. We own it.

We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment. We are a flat organization with unlimited growth opportunities and small team sizes, wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.

Introduction:
When was the last time you thought about rebuilding your smartphone charger using solar panels on your backpack, or changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful, or pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let's talk.

We are quite keen to meet you if:
- You eat, dream, sleep, and play with cloud data stores and engineering your processes on cloud architecture.
- You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people.
- You like experimenting, taking risks, and thinking big.

3 things this position is NOT about:
- This is NOT just a job; this is a passionate hobby for the right kind.
- This is NOT a boxed position. You will code, clean, test, build, and recruit, and you will feel that this is not really 'work'.
- This is NOT a position for people who like to spend more time talking than doing.

3 things this position IS about:
- Attention to detail matters.
- Roles, titles, and ego do not matter; getting things done matters; getting things done quicker and better matters the most.
- Are you passionate about learning new domains and architecting solutions that could save a company millions of dollars?

Roles and Responsibilities:
- Drive and define database design and development of real-time, complex products.
- Strive for excellence in customer experience, technology, methodology, and execution.
- Define and own end-to-end architecture from the definition phase to the go-live phase.
- Define reusable components/frameworks, common schemas, standards, and tools, and help bootstrap the engineering team.
- Performance tuning of application and database, and code optimizations.
- Define database strategy, database design and development standards and SDLC, database customization and extension patterns, database deployment and upgrade methods, database integration patterns, and data governance policies.
- Architect and develop database schemas, indexing strategies, views, and stored procedures for cloud applications.
- Assist in defining scope and sizing of work; analyze and derive NFRs; participate in proof-of-concept development.
- Contribute to innovation and continuous enhancement of the platform.
- Define and implement a strategy for data services to be used by cloud and web-based applications.
- Improve the performance, availability, and scalability of the physical database, including the database access layer, database calls, and SQL statements.
- Design robust cloud management implementations, including orchestration and catalog capabilities.
- Architect and design distributed data processing solutions using big data technologies (an added advantage).
- Demonstrate thought leadership in cloud computing across multiple channels and become a trusted advisor to decision-makers.

Desired Skills:
- Experience with data warehouse design, ETL (extraction, transformation & load), and architecting efficient software designs for DW platforms.
- Hands-on experience in the big data space (Hadoop stack: M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; knowledge of NoSQL stores is a plus).
- Knowledge of other transactional database management systems, open database systems, and NoSQL databases (MongoDB, Cassandra, HBase, etc.) is a plus.
- Good knowledge of data management principles like data architecture, data governance, very large database (VLDB) design, distributed database design, data replication, and high availability.
- Must have experience designing large-scale, highly available, fault-tolerant OLTP data management systems.
- Solid knowledge of at least one of the industry-leading RDBMSs, such as Oracle, SQL Server, DB2, or MySQL.
- Expertise in providing data architecture solutions and recommendations that are technology-neutral.
- Experience in architecture consulting engagements is a plus.
- Deep understanding of technical and functional designs for databases, data warehousing, reporting, and data mining.

Education & Experience:
- Bachelors in Engineering or Computer Science (preferably from a premier school), or an advanced degree in Engineering, Mathematics, Computer or Information Technology. Highly analytical aptitude and a strong 'desire to deliver' outlive those fancy degrees, more so if you have been a techie from 12.
- 2-5 years of experience in database design and development.
- AWS, Google Cloud Platform, or Hadoop experience.
- Experience working in a hands-on, fast-paced, creative entrepreneurial environment in a cross-functional capacity.

Job posted by Vishal Jarsania

Data Engineer
at Rely

Founded 2018
Bengaluru (Bangalore)
2 - 10 years
Best in industry: 8 - 35 lacs/annum

Intro:
Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs and architects, as well as develops and maintains, a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.

What will you do:
The data engineer is focused on making data correct and accessible, and on building scalable systems to access and process it. Another major responsibility is helping AI/ML engineers write better code.
- Optimize and automate ingestion processes for a variety of data sources, such as clickstream and transactional data.
- Create and maintain optimal data pipeline architecture and ETL processes.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Develop data pipelines and infrastructure to support real-time decisions.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.

What will you need:
- 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
- Experience dealing with data at large scale
- Proficiency in writing and debugging complex SQL
- Experience working with AWS big data tools
- Ability to lead a project and implement best data practices and technology
- Data pipelining: a strong command of building and optimizing data pipelines, architectures, and data sets
- A strong command of relational SQL and NoSQL databases, including Postgres
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Big data: strong experience with big data tools and applications (Hadoop, Spark, HDFS, etc.)
- AWS cloud services: EC2, EMR, RDS, Redshift
- Stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Message queuing: RabbitMQ, etc.
- Software development and debugging: strong experience with object-oriented programming/object function scripting languages (Python, Java, C++, Scala, etc.)
- A strong hold on data structures and algorithms

What would be a bonus:
- Prior experience working in a fast-growth startup
- Prior experience at payments, fraud, lending, or advertising companies dealing with large-scale data

Job posted by Hizam Ismail

DevOps Engineer

Founded 2017
Bengaluru (Bangalore)
3 - 10 years
Best in industry: 4 - 15 lacs/annum

- Works closely with the development team, technical lead, and solution architects within the Engineering group to plan ongoing feature development and product maintenance.
- Familiar with virtualization, containers (Kubernetes), core networking, cloud-native development, Platform as a Service (Cloud Foundry), Infrastructure as a Service, distributed systems, etc.
- Implements tools and processes for deployment, monitoring, alerting, automation, and scalability, ensuring maximum availability of server infrastructure.
- Should be able to manage distributed big data systems such as Hadoop, Storm, MongoDB, Elasticsearch, Cassandra, etc.
- Troubleshoots multiple deployment servers, handles software installation, manages licensing, etc.
- Plans, coordinates, and implements network security measures in order to protect data, software, and hardware.
- Monitors the performance of computer systems and networks, and coordinates computer network access and use.
- Designs, configures, and tests computer hardware, networking software, and operating system software.
- Recommends changes to improve systems and network configurations, and determines hardware or software requirements related to such changes.

Job posted by Anurag Mahanta

Senior Artificial Intelligence/Machine Learning Engineer

Founded 2017
Bengaluru (Bangalore)
3 - 10 years
Best in industry: 6 - 12 lacs/annum

Responsibilities:
- Define the short-term tactics and long-term technology strategy.
- Communicate that technical vision to technical and non-technical partners, customers, and investors.
- Lead the development of AI/ML-related products as the team matures into lean, high-performing agile teams.
- Scale the AI/ML teams by finding and hiring the right mix of on-shore and off-shore resources.
- Work collaboratively with the business, partners, and customers to consistently deliver business value.
- Own the vision and execution of developing and integrating AI and machine learning into all aspects of the platform.
- Drive innovation through the use of technology and unique ways of applying it to business problems.

Experience and Qualifications:
- Masters or Ph.D. in AI, computer science, ML, electrical engineering, or related fields (statistics, applied math, computational neuroscience)
- Relevant experience leading and building teams and establishing technical direction
- A well-developed portfolio of past software development, composed of some mixture of professional work, open source contributions, and personal projects
- Experience leading and developing remote and distributed teams
- Ability to think strategically and carry that thinking through to innovative solutions
- Experience with cloud infrastructure
- Experience working with machine learning, artificial intelligence, and large datasets to drive insights and business value
- Experience with agent architectures, deep learning, neural networks, computer vision, and NLP
- Experience with distributed computational frameworks (YARN, Spark, Hadoop)
- Proficiency in Python and C++; familiarity with DL frameworks (e.g., neon, TensorFlow, Caffe)

Personal Attributes:
- Excellent communication skills
- Strong fit with the culture
- Hands-on approach; self-motivated with a strong work ethic
- Ability to learn quickly (technology, business models, target industries)
- Creative and inspired

Superpowers we love:
- Entrepreneurial spirit and a vibrant personality
- Experience with the lean startup build-measure-learn cycle
- A vision for AI
- An extensive understanding of why things are done the way they are done in agile development
- A passion for adding business value

Note: The selected candidate will also be offered ESOPs.
Employment Type: Full Time
Salary: 8-10 Lacs + ESOP
Function: Systems/Product Software
Experience: 3 - 10 Years

Job posted by Layak Singh

Full-Stack Developer

Founded 2014
Bengaluru (Bangalore)
1 - 4 years
Best in industry: 3 - 8 lacs/annum

About us:
Intugine Technologies is a Bangalore-based startup solving the problem of tracking market-based vehicles.

Role:
This Full-Stack Developer position is responsible for independently analyzing, designing, and developing dashboard solutions to meet and exceed client reporting needs using an Agile development methodology.

Job Responsibilities:
1. Design, analyze, and develop innovative visualizations to meet client reporting needs.
2. Create dashboards and advanced reports using effective data visualization techniques (grids, graphs, and other forms of reports) using d3 and nvd3.
3. Hands-on experience with React/AngularJS.
4. Proficiency in microservices, web services, data structures, RDBMS, MySQL, and MongoDB.

Role Requirements:
- JavaScript (AngularJS, Node.js, Ext.js)
- Responsive web design
- Backend knowledge of Java, Hadoop, REST web services, APIs, and modern databases

Job posted by Ayush Agrawal

Data Engineer

Founded 2015
Bengaluru (Bangalore)
3 - 7 years
Best in industry: 15 - 40 lacs/annum

Designation: Senior Data Engineer Experience: 6+ years Work Location: Bangalore About the role: At Bizongo, we believe in delivering excellence which drives business efficiency for our customers. As a Data Engineer at Bizongo, you will be working on developing the next generation of technology that will impact how businesses take care of their process and derive process efficiency. We are looking for engineers who can bring fresh ideas, function at scale and are passionate about technology. We expect our engineers to be multidimensional, display leadership and have a zeal for learning as well as experimentation as we push business efficiency through our technology. As a Data Engineer, You’ll be provided with enough freedom and opportunity to explore/analyse and build data pipeline and the data analytics from the scratch. Roles and Responsibilities Help to improve our Data Models Constantly thinking of building data services that are consumed by Backend Services Aligning database architecture with business requirements Help and review database architecture along with the Backend Development team Coming up with data warehouse platform for reporting purposes. Figuring out inconsistencies and anomalies in our current data model systems and Organising unstructured data - setting up the logging process and building analytics on the same Must-haves: Excellent in data structures, analyzing and solving problems B.Tech/B.E. in Computer Science and Engineering Write scalable, production level services Thorough with using Git Performance optimization, SQL tuning, caching techniques Understanding of how the web works in general Knowledge of creating fault­-tolerant, extensible, reusable architecture Passionate to work in a start-up It would be a plus if: You understand prevalent design patterns You have a fair knowledge of Backend and Frontend development You have fair soft skills Why work with us? Opportunity to work with "India’s leading B2B" E-commerce venture. 
The company grew its revenue by more than 12x last year to reach a 200 Cr annual revenue run rate. We invite you to be part of the upcoming growth story of the B2B sector through Bizongo. You will work with some of the most dynamic individuals in Asia, recognised under Forbes 30 Under 30, and industry stalwarts from companies like Microsoft, Paypal, Gravitas, Parksons, ITC, Snapdeal, Fedex, Deloitte and HUL. Working at Bizongo means being part of a dynamic start-up with some of the most enthusiastic, hardworking and intelligent people, in a fast-paced and electrifying environment. Bizongo was awarded the most Disruptive Procurement Startup of the Year in 2017. This year, Bizongo received the highly prestigious and internationally acclaimed DieLine Award under the Sustainable Packaging category. As a company that is expanding every day and exploring newer avenues in the market, every employee grows with the company. The position provides a chance to build on existing talents, learn new skills and gain valuable experience in the field of e-commerce.

About the Company:
Company Website: https://www.bizongo.in

Any solution worth anything is unfailingly preceded by clear articulation of a problem worth solving. Even a modest study of the Indian packaging industry would lead one to observe the enormous fragmentation, chaos and rampant unreliability pervading the ecosystem. When businesses are unable to cope even with these basic challenges, how can they think of materializing an eco-friendly, resource-efficient packaging economy? These are hardcore problems with real-world consequences which our country is hard-pressed to solve. Bizongo was conceived as an answer to these first-level challenges of disorganization in the industry. We employed technology to build a business model that can streamline the packaging value chain and has enormous potential to scale sustainably.
Our potential to fill this vacuum was recognized early on by Accel Partners and IDG Ventures, who jointly led our Series A funding. Most recently, B Capital Group, a global tech fund led by Facebook co-founder Eduardo Saverin, invested in our technological capabilities when it jointly led our Series B funding with IFC. The International Finance Corporation (IFC), the private-sector investment arm of the World Bank, cited our positive ecosystem impact on the network of 30,000 SMEs operating in the packaging industry as one of the core reasons for its investment decision. Beyond these bastions of support, we are extremely grateful to have found validation by various authoritative institutions, including Forbes 30 Under 30 Asia. Being the only major B2B player in the country with such an unprecedented model has given us enormous scope for experimentation in our efforts to break new ground. Dreaming and learning together, we have grown from a team of 3 at our founding in 2015 to a 250+ strong family with offices across Mumbai, Gurgaon and Bengaluru. Those who strive to rise above their own limitations, who seek to build an ecosystem of positive change and to find remarkable solutions to challenges where none existed before, will find a welcome abode at Bizongo.
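The role above calls for organising unstructured data by setting up a logging process and building analytics on top of it. A minimal, hypothetical sketch of that idea in plain Python follows; the log format, field names and the `daily_event_counts` helper are all invented for illustration, and a production pipeline would stream rather than hold everything in memory:

```python
import json
from collections import Counter
from datetime import datetime

def parse_log_line(line):
    """Parse one JSON-encoded log line into a structured record.

    Returns None for malformed lines so bad input never halts the pipeline.
    """
    try:
        record = json.loads(line)
        # Normalise the timestamp so downstream analytics can bucket by day.
        record["day"] = datetime.fromisoformat(record["ts"]).date().isoformat()
        return record
    except (ValueError, KeyError):
        return None

def daily_event_counts(lines):
    """Count events per (day, event_type) across an iterable of raw log lines."""
    counts = Counter()
    for record in filter(None, map(parse_log_line, lines)):
        counts[(record["day"], record["event"])] += 1
    return counts

logs = [
    '{"ts": "2019-06-01T10:00:00", "event": "order_created"}',
    '{"ts": "2019-06-01T11:30:00", "event": "order_created"}',
    'not valid json',
    '{"ts": "2019-06-02T09:15:00", "event": "order_shipped"}',
]
print(daily_event_counts(logs))
```

Tolerating malformed lines (rather than raising) is a common design choice for log pipelines, since a single bad record should not stall the whole batch.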

Job posted by Sucheta Jadhav

Backend Engineer - Java/Scala/Distributed System/NoSQL

Founded 2015
Bengaluru (Bangalore)
3 - 7 years
8 - 9 lacs/annum

Systems Engineer

About Intellicar Telematics Pvt Ltd:
Intellicar Telematics Private Limited is a vehicular telematics organization founded in 2015 with the vision of connecting businesses and customers to their vehicles in a meaningful way. We provide vehicle owners with the ability to connect to and diagnose vehicles remotely in real time. Our team consists of individuals with in-depth knowledge and understanding of automotive engineering, driver analytics and information technology. By leveraging our expertise in the automotive domain, we have created solutions to reduce operational and maintenance costs of large fleets, and ensure safety at all times.

Solutions:
- Enterprise fleet management, GPS tracking
- Remote engine diagnostics, driver behavior & training
- Technology integration: GIS, GPS, GPRS, OBD, WEB, accelerometer, RFID, on-board storage

Intellicar's team of accomplished automotive engineers, hardware manufacturers, software developers and data scientists has developed the best solutions to track vehicles and drivers, and ensure optimum performance, utilization and safety at all times. We cater to the needs of clients across industries such as self-drive cars, taxi cab rentals, taxi cab aggregators, logistics, driver training, bike rentals, construction, e-commerce, armored trucks, manufacturing, dealerships and more.
Desired skills as a developer:
- Education: BE/B.Tech in Computer Science or a related field.
- 4+ years of experience with scalable distributed-systems applications and building scalable multi-threaded server applications.
- Strong programming skills in Java or Scala on Linux or a Unix-based OS.
- Understanding of distributed systems like Hadoop, Spark, Cassandra and Kafka.
- Good understanding of HTTP, SQL and database internals.
- Good understanding of the Internet and how it works.
- Create new features from scratch, enhance existing features and optimize existing functionality, from conception and design through testing and deployment.
- Work on projects that make our network more stable, faster and more secure.
- Work with our development QA and system QA teams to come up with regression tests that cover new changes to our software.
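One of the skills listed above is building scalable multi-threaded server applications. The thread-per-connection pattern at the heart of that skill can be sketched as below; this is a toy illustration in Python rather than the Java/Scala the role asks for, and the `serve` and `handle_client` names are invented here:

```python
import socket
import threading

def handle_client(conn):
    """Echo bytes back until the client disconnects; one thread per connection."""
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)

def serve(host="127.0.0.1", port=0):
    """Start a minimal multi-threaded echo server; returns the listening socket.

    Port 0 lets the OS pick a free port; real servers would bind a fixed one.
    """
    server = socket.create_server((host, port))

    def accept_loop():
        while True:
            try:
                conn, _addr = server.accept()
            except OSError:  # listening socket closed during shutdown
                return
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return server
```

Production servers replace the unbounded thread-per-connection model with a thread pool or an event loop, which is exactly the scalability concern the posting alludes to.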

Job posted by Lata Patil

Big Data/Java Programming

Founded 2007
Bengaluru (Bangalore)
3 - 9 years
3 - 9 lacs/annum

What you'll do:
- Develop analytics tools, working on big data in a distributed environment; scalability will be key
- Provide architectural and technical leadership in developing our core analytics platform
- Lead development efforts on product features in Java
- Help scale our mobile platform as we experience massive growth

What we need:
- Passion to build an analytics and personalisation platform at scale
- 3 to 9 years of software engineering experience with a product-based company in the data analytics/big data domain
- Passion for designing and developing from scratch
- Expert-level Java programming and experience leading the full lifecycle of application development
- Experience in analytics, Hadoop, Pig, Hive, MapReduce, ElasticSearch and MongoDB is an additional advantage
- Strong communication skills, verbal and written
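For candidates new to the MapReduce model mentioned above, the map, shuffle and reduce phases can be sketched in a few lines. This is a toy, single-process word count in Python for illustration only; a real Hadoop job distributes each phase across a cluster:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map step: emit (word, 1) pairs for one document."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle step: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

documents = ["big data big analytics", "data pipelines at scale"]
word_counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, documents))))
print(word_counts)
```

The same three-phase shape underlies Pig and Hive as well, which compile their higher-level queries down to chains of map and reduce steps.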

Job posted by Khushboo Jain

PySpark Developer

via IQVIA
Founded 1969
Bengaluru (Bangalore)
5 - 8 years
10 - 17 lacs/annum

1. Advanced PySpark (Python + Spark): 5-7 years of experience in Python - Must
2. Distributed processing (Cloudera cluster experience, CDSW, etc.) - Good to have
3. Object-oriented programming in Python - Must
4. Writing unit tests in Python - Must
5. Big data skills like Hive, Hadoop, MapReduce - Good to have
6. Good knowledge of Git (branching, merging, regular commits) - Must
7. Software development experience - Must
8. Best coding practices - Must
9. Prod-ops knowledge - Nice to have
10. Experience in leading teams - Senior developer, able to lead a team in future
11. Continuous integration and continuous delivery - Good to have
12. Agile - Must
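The "must" items on object-oriented programming and unit tests in Python go hand in hand, and both can be shown with only the standard library `unittest` module. A minimal, hypothetical example follows; the `SalesAggregator` class is invented purely for illustration:

```python
import unittest

class SalesAggregator:
    """Toy OO example: accumulate (region, amount) records and report totals."""

    def __init__(self):
        self._totals = {}

    def add(self, region, amount):
        if amount < 0:
            raise ValueError("amount must be non-negative")
        self._totals[region] = self._totals.get(region, 0) + amount

    def total(self, region):
        return self._totals.get(region, 0)

class SalesAggregatorTest(unittest.TestCase):
    def test_accumulates_per_region(self):
        agg = SalesAggregator()
        agg.add("south", 100)
        agg.add("south", 50)
        self.assertEqual(agg.total("south"), 150)

    def test_unknown_region_is_zero(self):
        self.assertEqual(SalesAggregator().total("north"), 0)

    def test_rejects_negative_amounts(self):
        with self.assertRaises(ValueError):
            SalesAggregator().add("north", -1)
```

Running `python -m unittest` discovers and executes the test case; the same habits carry over to PySpark code, where the logic under test is typically factored out of the Spark job so it can be exercised without a cluster.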

Job posted by Ambili Sasidharan

Senior Software Engineer (Java/Scala/SOLR/Data/Hadoop)

Founded 2015
Bengaluru (Bangalore)
5 - 8 years
10 - 15 lacs/annum

Interested in building high-performance search systems to handle petabytes of retail data, while working in an agile, small-company environment? At CodeHall Technologies, you will have the opportunity to work with the newest technology in search and browse. We are working on systems that power and personalize site search, considering the user intent behind every query and providing a wholly unique, engaging search experience designed to display the most relevant results through Findability.

Primary responsibilities:
- Build high-performance search systems for personalization, optimization and targeting
- Build systems with Hadoop, Solr, Cassandra, Flink, Spark and MongoDB
- Apply a deep understanding of HTTP and REST principles
- Bring good diagnostic and troubleshooting skills
- Unit testing with JUnit; performance testing and tuning
- Work with rapid, innovative development methodologies: Kanban, continuous integration and daily deployments
- Apply highly proficient software engineering skills in Java
- Coordinate with internal and external teams
- Mentor junior engineers
- Participate in product design discussions and decisions

Minimum requirements:
- BS/MS in CS, Electrical Engineering or a foreign equivalent, plus relevant software development experience
- At least 5-8 years of software development experience
- Expert in Java, Scala or another object-oriented language
- Proficient in SQL concepts (HiveQL or Postgres a plus)
- Additional language skills for scripting and rapid application development

Desired skills and experience:
- Working with large data sets in the petabytes
- Familiarity with UNIX (systems skills a plus)
- Working experience with Solr, Cassandra, MongoDB and Hadoop
- Experience in a distributed environment, dealing with challenges around scaling and performance
- Proven ability to project and meet scheduled deadlines
- Self-driven, quick learner with attention to detail and quality
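At the core of search engines like Solr sits the inverted index, which maps each token to the documents containing it. A toy sketch of the idea in Python (AND-semantics only; production engines such as Solr add tokenization, ranking and index compression on top):

```python
from collections import defaultdict

def build_index(docs):
    """Build an inverted index: token -> set of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """AND-style search: ids of docs containing every query token."""
    token_sets = [index.get(token, set()) for token in query.lower().split()]
    if not token_sets:
        return set()
    return set.intersection(*token_sets)

docs = {
    1: "red running shoes",
    2: "blue running shoes",
    3: "red leather boots",
}
index = build_index(docs)
print(search(index, "red shoes"))
```

Intersecting per-token posting sets like this is why query latency in real engines depends on posting-list length, and why skip lists and caching matter at petabyte scale.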

Job posted by Avneesh Jain

DevOps Engineer

Founded 1998
Bengaluru (Bangalore)
2 - 8 years
10 - 40 lacs/annum

What is the job like?
We are looking for a talented individual to join our DevOps and Platforms Engineering team. You will play an important role in helping build and run our globally distributed infrastructure stack and platforms. Technologies you can expect to work with every day include Linux, AWS, MySQL/PostgreSQL, MongoDB, Hadoop/HBase, ElasticSearch, FreeSwitch, Jenkins, Nagios and CFEngine, among others.

Responsibilities:
- Troubleshoot and fix production outages and performance issues in our AWS/Linux infrastructure stack
- Build automation tools for provisioning and managing our cloud infrastructure by leveraging the AWS API for EC2, S3, CloudFront, RDS and Route53, among others
- Contribute to enhancing and managing our continuous delivery pipeline
- Proactively seek out opportunities to improve monitoring and alerting of our hosts and services, and implement them in a timely fashion
- Write scripts and tools to collect and visualize metrics from Linux hosts and JVM applications
- Enhance and maintain our log collection, processing and visualization infrastructure
- Automate systems configuration by writing policies and modules for configuration management tools
- Write both frontend (HTML/CSS/JS) and backend code (Python, Ruby, Perl)
- Participate in periodic on-call rotations for DevOps

Skills:
- DevOps/system admin experience ranging between 3-4 years
- In-depth Linux/Unix knowledge; good understanding of the various Linux kernel subsystems (memory, storage, network, etc.)
- DNS, TCP/IP, routing, HA & load balancing
- Configuration management using tools like CFEngine, Puppet or Chef
- SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB and HBase
- Build and packaging tools like Jenkins and RPM/Yum
- HA and load balancing using tools like the Elastic Load Balancer and HAProxy
- Monitoring tools like Nagios, Pingdom or similar
- Log management tools like logstash, fluentd, syslog, elasticsearch or similar
- Metrics collection tools like Ganglia, Graphite, OpenTSDB or similar
- Programming in a high-level language like Python or Ruby
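One of the responsibilities in this role is writing scripts to collect metrics from hosts and services. A small, hypothetical sketch in standard-library Python that aggregates latency and error-rate metrics from request log lines; the log format, field names and `collect_metrics` helper are invented for illustration:

```python
import re
from statistics import quantiles

# Hypothetical access-log format; real log layouts will differ.
LOG_PATTERN = re.compile(r'status=(?P<status>\d{3}) latency_ms=(?P<latency>\d+)')

def collect_metrics(lines):
    """Aggregate per-request log lines into simple service-level metrics."""
    latencies, errors = [], 0
    for line in lines:
        match = LOG_PATTERN.search(line)
        if not match:
            continue  # skip unparseable lines rather than failing the collector
        if match.group("status").startswith("5"):
            errors += 1
        latencies.append(int(match.group("latency")))
    return {
        "requests": len(latencies),
        "error_rate": errors / len(latencies) if latencies else 0.0,
        # quantiles(n=20)[18] is the 95th-percentile cut point.
        "p95_latency_ms": quantiles(latencies, n=20)[18] if len(latencies) > 1 else None,
    }

lines = [f"GET /api status=200 latency_ms={i * 10}" for i in range(1, 100)]
lines.append("GET /api status=500 latency_ms=1200")
metrics = collect_metrics(lines)
print(metrics)
```

In practice a collector like this would ship the resulting numbers to a time-series backend such as Graphite or OpenTSDB rather than printing them.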

Job posted by Richa Pancholy