


Hadoop Jobs in Bangalore (Bengaluru)

Explore top Hadoop job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are posted by verified employees who can be contacted directly below.

Data Engineer

Founded 1993
Products and services
Location: Anywhere
Experience: 3 - 10 years
Salary: Best in industry, 10 - 32 lacs/annum

Data Engineering role at ThoughtWorks

ThoughtWorks India is looking for talented data engineers passionate about building large-scale data processing systems to help manage the ever-growing information needs of our clients. Our developers have been contributing code to major organizations and open source projects for over 25 years. They have also been writing books, speaking at conferences, and helping push software development forward, changing companies and even industries along the way. As consultants, we work with our clients to ensure we are delivering the best possible solution. Our Lead Dev plays an important role in leading these projects to success.

You will be responsible for:
- Creating complex data processing pipelines as part of diverse, high-energy teams
- Designing scalable implementations of the models developed by our Data Scientists
- Hands-on programming based on TDD, usually in a pair-programming environment
- Deploying data pipelines in production based on Continuous Delivery practices

Ideally, you should have:
- 2-6 years of overall industry experience
- A minimum of 2 years of experience building and deploying large-scale data processing pipelines in a production environment
- Strong domain modelling and coding experience in Java, Scala, or Python
- Experience building data pipelines and data-centric applications using distributed storage platforms such as HDFS, S3, and NoSQL databases (HBase, Cassandra, etc.), and distributed processing platforms such as Hadoop, Spark, Hive, Oozie, Airflow, and Kafka, in a production setting
- Hands-on experience with at least one of MapR, Cloudera, Hortonworks, and/or cloud offerings (AWS EMR, Azure HDInsight, Qubole, etc.)
- Knowledge of software best practices such as Test-Driven Development (TDD), Continuous Integration (CI), and Agile development
- Strong communication skills and the ability to work in a consulting environment are essential

And here are some of the perks of being part of a unique organization like ThoughtWorks:
- A real commitment to "changing the face of IT", our way of thinking about diversity and inclusion. Over the past ten years, we have implemented many initiatives to make ThoughtWorks a place that reflects the world around us, and a welcoming home to technologists of all stripes. We are not perfect, but we are actively working towards true gender balance for our business and our industry, and you will see that diversity reflected on our project teams and in our offices.
- Continuous learning. You will be constantly exposed to new languages, frameworks, and ideas from your peers and as you work on different projects, challenging you to stay at the top of your game.
- Support to grow as a technologist outside of your role at ThoughtWorks. This is why ThoughtWorkers have written over 100 books and can be found speaking at (and, ahem, keynoting) tech conferences all over the world. We love to learn and share knowledge, and you will find a community of passionate technologists eager to back your endeavors, whatever they may be. You will also receive financial support to attend conferences every year.
- An organizational commitment to social responsibility. ThoughtWorkers challenge each other to be just a little more thoughtful about the world around us, and we believe in using our profits for good. All around the world, you will find ThoughtWorks supporting great causes and organizations in both official and unofficial capacities.

If you relish the idea of being part of ThoughtWorks' Data Practice that extends beyond the work we do for our customers, you may find ThoughtWorks is the right place for you. If you share our passion for technology and want to help change the world with software, we want to hear from you!
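The TDD-plus-pipelines combination this role describes can be sketched in plain Python: a single pipeline transformation written alongside the test that drives it. The `normalize_ride` function, the record shape, and the rounding rule are hypothetical illustrations, not from the job post.

```python
# Minimal TDD sketch: one pipeline cleaning step plus the test written for it.
# `normalize_ride` and its record fields are invented for illustration.

def normalize_ride(record: dict) -> dict:
    """Clean one raw event before it enters the pipeline."""
    return {
        "user_id": str(record["user_id"]).strip(),
        "fare": round(float(record.get("fare", 0.0)), 2),
        "city": record.get("city", "unknown").lower(),
    }

def test_normalize_ride():
    raw = {"user_id": " 42 ", "fare": "199.999", "city": "Bangalore"}
    clean = normalize_ride(raw)
    assert clean == {"user_id": "42", "fare": 200.0, "city": "bangalore"}

test_normalize_ride()
```

In a pair-programming setting the test is typically written first and the transformation grown to satisfy it, which is the workflow the listing's TDD requirement points at.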

Job posted by
Suresh Teegireddy

Data Engineer

Founded 2017
Products and services
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: Best in industry, 15 - 28 lacs/annum

Job Description

We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.

Responsibilities:
- Work with Big Data tools and frameworks to provide requested capabilities
- Identify development needs in order to improve and streamline operations
- Develop and manage BI solutions
- Implement ETL processes and data warehousing
- Monitor performance and manage infrastructure

Skills:
- Proficient understanding of distributed computing principles
- Proficiency with Hadoop and Spark
- Experience building stream-processing systems using solutions such as Kafka and Spark Streaming
- Good knowledge of data querying tools such as SQL and Hive
- Knowledge of various ETL techniques and frameworks
- Experience with at least one of Python, Java, or Scala
- Experience with cloud services such as AWS or GCP
- Experience with NoSQL databases such as DynamoDB or MongoDB is an advantage
- Excellent written and verbal communication skills
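The stream-processing skills listed (Kafka, Spark Streaming) center on one core idea: grouping an unbounded event stream into fixed time windows. A minimal stdlib-only sketch of a tumbling-window count, with an assumed `(timestamp, key)` event shape and a made-up window size:

```python
# Sketch of tumbling-window aggregation, the idea behind a typical
# Kafka + Spark Streaming job, simulated over a plain list of events.
from collections import Counter, defaultdict

def tumbling_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed windows and count keys."""
    windows = defaultdict(Counter)
    for ts, key in events:
        windows[ts - ts % window_secs][key] += 1   # floor ts to window start
    return dict(windows)

events = [(5, "click"), (30, "view"), (65, "click"), (70, "click")]
print(tumbling_counts(events))
# the first two events land in window 0, the last two in window 60
```

A real streaming engine adds watermarks, late-event handling, and fault tolerance on top, but the windowing arithmetic is the same.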

Job posted by
Keerthana k

Data Engineer at Uber

via Uber
Founded 2012
Products and services
Location: Bengaluru (Bangalore)
Experience: 4 - 8 years
Salary: Best in industry, 25 - 50 lacs/annum

About the Role

If you are interested in building large-scale data pipelines that impact how Uber makes decisions about the Rider lifecycle and experience, join the Rider Data Platform team. Uber collects petabyte-scale analytics data from its different ride-booking apps. Help us build the software systems and data models that will enable data scientists to reason about user behavior and build models for consumption by different rider-facing program teams.

What You'll Do:
- Identify unified data models in collaboration with Data Science teams
- Streamline data processing of the original event sources and consolidate them into source-of-truth event logs
- Build and maintain real-time/batch data pipelines that can consolidate and clean up usage analytics
- Build systems that monitor data losses from the mobile sources
- Devise strategies to consolidate and compensate for data losses by correlating different sources
- Solve challenging data problems with cutting-edge design and algorithms

What You'll Need:
- 4+ years of experience in a competitive engineering environment
- Design: knowledge of data structures and an eye for design. You can discuss the tradeoffs between design choices, both on a theoretical level and on an applied level.
- Strong coding/debugging abilities: you have advanced knowledge of at least one programming language, and are happy to learn more. Our core languages are Java, Python, and Scala.
- Big data: experience with distributed systems such as Hadoop, Hive, Spark, and Kafka is preferred.
- Data pipelines: a strong understanding of SQL and databases. Experience building data pipelines is a great plus. You love getting your hands dirty with data, implementing custom ETLs to shape it into information.
- A team player: you believe that you can achieve more on a team, that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement.
- Business acumen: you understand requirements beyond the written word. Whether you're working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.

About the Team

Rider Data Platform is a relatively new team tasked with shaping the future architecture of Uber's Rider data stack. We are a bunch of engineers passionate about helping Uber grow by focusing our energy on building the next-gen data platform to provide insights into global Rider data in the most optimal manner. This will be instrumental in identifying gaps in the current implementation as well as formulating the key strategies for the overall Rider experience.

Uber

At Uber, we ignite opportunity by setting the world in motion. We take on big problems to help drivers, riders, delivery partners, and eaters get moving in more than 600 cities around the world. We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together.
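"Monitor data losses by correlating different sources", as the role puts it, reduces to comparing what one source emitted against what another ingested. A minimal sketch under assumed data (the event IDs and log names are invented, not Uber's):

```python
# Sketch of loss detection by correlating two event sources: IDs the mobile
# client says it sent vs. IDs that actually reached the backend log.

def missing_events(client_ids, server_ids):
    """IDs emitted by the app but never ingested server-side."""
    return sorted(set(client_ids) - set(server_ids))

client_log = ["e1", "e2", "e3", "e4"]   # what the app reported sending
server_log = ["e1", "e3"]               # what ingestion actually recorded
lost = missing_events(client_log, server_log)
print(lost)                              # ['e2', 'e4']
loss_rate = len(lost) / len(client_log)  # 0.5 here; alert above a threshold
```

A production system would do this join at scale (e.g. in Spark or Hive over daily partitions), but the correlation logic is this set difference.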

Job posted by
Suvidha Chib

Data Engineer

Location: Bengaluru (Bangalore)
Experience: 3 - 5 years
Salary: Best in industry, 7 - 13 lacs/annum

Qualifications for Big Data Engineer

We are looking for a candidate with 2+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science, Information Systems, or another quantitative field. They should also have experience with the following software/tools:
- Big data tools: Hadoop, Spark, Kafka, Hive, etc.
- Relational SQL and NoSQL databases
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift
- Stream-processing systems: Storm, Spark Streaming, etc.
- Object-oriented/object-function scripting languages: Java, Scala, Python, etc.
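The workflow management tools named here (Azkaban, Luigi, Airflow) all share one model: tasks declared as a dependency DAG, from which the scheduler derives a run order. A stdlib-only sketch of that idea, with made-up task names:

```python
# Sketch of the DAG scheduling idea behind Azkaban/Luigi/Airflow: declare
# which tasks depend on which, then resolve a valid execution order.
from graphlib import TopologicalSorter

deps = {
    "clean": {"extract"},               # clean runs after extract
    "load_warehouse": {"clean"},        # load runs after clean
    "report": {"load_warehouse"},       # report runs last
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['extract', 'clean', 'load_warehouse', 'report']
```

Real schedulers add retries, backfills, and per-task operators, but the topological ordering of a declared DAG is the common core.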

Job posted by
Biswajit Sahu

ETL Talend Developer

Founded 2011
Products and services
Location: Bengaluru (Bangalore)
Experience: 5 - 19 years
Salary: Best in industry, 10 - 30 lacs/annum

Strong exposure to ETL / Big Data / Talend / Hadoop / Spark / Hive / Pig.

To be considered for a Senior Data Engineer position, a candidate must have a proven track record of architecting data solutions on current and advanced technical platforms. They must have the leadership ability to lead a team providing data-centric solutions with best practices and modern technologies in mind. They build collaborative relationships across all levels of the business and the IT organization. They possess analytic and problem-solving skills and the ability to research and provide appropriate guidance for synthesizing complex information and extracting business value. They have the intellectual curiosity and ability to deliver solutions with creativity and quality, work effectively with the business and customers to obtain business value for the requested work, and can communicate technical results to both technical and non-technical users using effective storytelling techniques and visualizations. They have a demonstrated ability to perform high-quality work with innovation, both independently and collaboratively.

Job posted by
Shobha B K

Data Engineer

Founded 2004
Products and services
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: Best in industry, 5 - 8 lacs/annum

Who are we?

Searce is a Cloud, Automation & Analytics-led business transformation company focused on helping futurify businesses. We help our clients become successful by helping them reimagine 'what's next' and then enabling them to realize that 'now'. We processify, saasify, innovify & futurify businesses by leveraging Cloud | Analytics | Automation | BPM.

What do we believe?
- Best practices are overrated: implementing best practices can only make one 'average'.
- Honesty and transparency: we believe in the naked truth. We do what we tell and tell what we do.
- Client partnership: a client-vendor relationship? No. We partner with clients instead. And our sales team comprises 100% of our clients.

How do we work?

It's all about being Happier first, and the rest follows. Searce work culture is defined by HAPPIER:
- Humble: happy people don't carry ego around. We listen to understand, not to respond.
- Adaptable: we are comfortable with uncertainty, and we accept changes well, as that's what life is about.
- Positive: we are super positive about work and life in general. We love to forget and forgive. We don't hold grudges; we don't have the time or space for them.
- Passionate: we are as passionate about the great vada-pao vendor across the street as about Tesla's new model, and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: innovate or die. We love to challenge the status quo.
- Experimental: we encourage curiosity and making mistakes.
- Responsible: driven, self-motivated, self-governing teams. We own it.

We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment. We are a flat organization with unlimited growth opportunities and small team sizes, wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.

Introduction

When was the last time you thought about rebuilding your smartphone charger using solar panels on your backpack, or changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful, or pointed out an engineering flaw in the sequencing of traffic-signal lights to a fellow passenger while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let's talk.

We are quite keen to meet you if:
- You eat, dream, sleep, and play with cloud data stores and engineering your processes on cloud architecture
- You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people
- You like experimenting, taking risks, and thinking big

3 things this position is NOT about:
- This is NOT just a job; this is a passionate hobby for the right kind.
- This is NOT a boxed position. You will code, clean, test, build, and recruit, and you will feel that this is not really 'work'.
- This is NOT a position for people who like to spend more time talking than doing.

3 things this position IS about:
- Attention to detail matters.
- Roles, titles, and ego do not matter; getting things done matters; getting things done quicker and better matters the most.
- Are you passionate about learning new domains and architecting solutions that could save a company millions of dollars?

Roles and Responsibilities
- Drive and define database design and development of real-time, complex products.
- Strive for excellence in customer experience, technology, methodology, and execution.
- Define and own end-to-end architecture from the definition phase to the go-live phase.
- Define reusable components/frameworks, common schemas, standards, and tools to be used, and help bootstrap the engineering team.
- Performance tuning of applications and databases, and code optimizations.
- Define database strategy, database design and development standards and SDLC, database customization and extension patterns, database deployment and upgrade methods, database integration patterns, and data governance policies.
- Architect and develop database schemas, indexing strategies, views, and stored procedures for cloud applications.
- Assist in defining scope and sizing of work; analyze and derive NFRs; participate in proof-of-concept development.
- Contribute to innovation and continuous enhancement of the platform.
- Define and implement a strategy for data services to be used by cloud and web-based applications.
- Improve the performance, availability, and scalability of the physical database, including the database access layer, database calls, and SQL statements.
- Design robust cloud management implementations, including orchestration and catalog capabilities.
- Architect and design distributed data processing solutions using big data technologies (an added advantage).
- Demonstrate thought leadership in cloud computing across multiple channels and become a trusted advisor to decision-makers.

Desired Skills
- Experience with data warehouse design, ETL (Extraction, Transformation & Load), and architecting efficient software designs for DW platforms.
- Hands-on experience in the Big Data space (Hadoop stack: M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; knowledge of NoSQL stores is a plus).
- Knowledge of other transactional database management systems, open database systems, and NoSQL databases (MongoDB, Cassandra, HBase, etc.) is a plus.
- Good knowledge of data management principles such as data architecture, data governance, very large database design (VLDB), distributed database design, data replication, and high availability.
- Must have experience designing large-scale, highly available, fault-tolerant OLTP data management systems.
- Solid knowledge of at least one industry-leading RDBMS, such as Oracle, SQL Server, DB2, or MySQL.
- Expertise in providing data architecture solutions and recommendations that are technology-neutral.
- Experience in architecture consulting engagements is a plus.
- Deep understanding of technical and functional designs for databases, data warehousing, reporting, and data mining.

Education & Experience
- Bachelors in Engineering or Computer Science (preferably from a premier school), or an advanced degree in Engineering, Mathematics, Computer Science, or Information Technology. A highly analytical aptitude and a strong 'desire to deliver' outlive those fancy degrees, more so if you have been a techie from age 12.
- 2-5 years of experience in database design and development
- 0+ years of experience with AWS, Google Cloud Platform, or Hadoop
- Experience working in a hands-on, fast-paced, creative, entrepreneurial environment in a cross-functional capacity.

Job posted by
Vishal Jarsania

Data Engineer at Rely

via Rely
Founded 2018
Products and services
Location: Bengaluru (Bangalore)
Experience: 2 - 10 years
Salary: Best in industry, 8 - 35 lacs/annum

Intro

Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops, and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.

What will you do

The data engineer is focused on making data correct and accessible, and on building scalable systems to access and process it. Another major responsibility is helping AI/ML engineers write better code.
- Optimize and automate ingestion processes for a variety of data sources such as clickstream, transactional, and many other sources.
- Create and maintain optimal data pipeline architecture and ETL processes.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Develop data pipelines and infrastructure to support real-time decisions.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.

What will you need
- 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
- Experience dealing with large-scale data; proficiency in writing and debugging complex SQL
- Experience working with AWS big data tools
- Ability to lead the project and implement best data practices and technology
- Data pipelining: a strong command of building and optimizing data pipelines, architectures, and data sets; a strong command of relational SQL and NoSQL databases, including Postgres; data pipeline and workflow management tools such as Azkaban, Luigi, and Airflow
- Big data: strong experience with big data tools and applications (Hadoop, Spark, HDFS, etc.); AWS cloud services (EC2, EMR, RDS, Redshift); stream-processing systems (Storm, Spark Streaming, Flink, etc.); message queuing (RabbitMQ, Spark, etc.)
- Software development and debugging: strong experience in object-oriented programming/object-function scripting languages (Python, Java, C++, Scala, etc.); a strong hold on data structures and algorithms

What would be a bonus
- Prior experience working in a fast-growth startup
- Prior experience in payments, fraud, lending, or advertising companies dealing with large-scale data
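The "complex SQL" side of the role, warehouse-style aggregation over relational data, can be sketched with stdlib `sqlite3` standing in for Postgres or Redshift. The schema and rows below are invented for illustration:

```python
# Tiny relational-SQL sketch: load a few rows, then run a GROUP BY
# aggregation of the kind warehouse pipelines produce daily.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (user_id TEXT, amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?, ?)",
                 [("u1", 100.0), ("u1", 50.0), ("u2", 75.0)])

# Total spend per user, highest first.
rows = conn.execute(
    "SELECT user_id, SUM(amount) FROM payments "
    "GROUP BY user_id ORDER BY SUM(amount) DESC").fetchall()
print(rows)  # [('u1', 150.0), ('u2', 75.0)]
```

The same query shape ports directly to Postgres or Redshift; what changes at scale is partitioning and the engine, not the SQL.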

Job posted by
Hizam Ismail

DevOps Engineer

Founded 2017
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 10 years
Salary: Best in industry, 4 - 15 lacs/annum

- Works closely with the development team, technical lead, and Solution Architects within the Engineering group to plan ongoing feature development and product maintenance.
- Familiar with virtualization; containers (Kubernetes); core networking; cloud-native development; Platform as a Service (Cloud Foundry); Infrastructure as a Service; distributed systems; etc.
- Implements tools and processes for deployment, monitoring, alerting, automation, scalability, and ensuring maximum availability of server infrastructure.
- Should be able to manage distributed big data systems such as Hadoop, Storm, MongoDB, Elasticsearch, and Cassandra.
- Troubleshoots multiple deployment servers, software installation, licensing management, etc.
- Plans, coordinates, and implements network security measures in order to protect data, software, and hardware.
- Monitors the performance of computer systems and networks, and coordinates computer network access and use.
- Designs, configures, and tests computer hardware, networking software, and operating system software.
- Recommends changes to improve systems and network configurations, and determines hardware or software requirements related to such changes.
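The monitoring-and-alerting duty above usually starts as a simple threshold check run by an agent or cron job. A hedged stdlib-only sketch; the paths and the 90% threshold are illustrative assumptions, not from the post:

```python
# Minimal monitoring sketch: flag filesystems whose usage crosses a
# threshold, the kind of check an alerting agent evaluates periodically.
import shutil

def disk_alerts(paths, threshold=0.90):
    """Return the paths whose disk usage exceeds the threshold fraction."""
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)
        if usage.used / usage.total > threshold:
            alerts.append(path)
    return alerts

print(disk_alerts(["/"]))  # e.g. [] while usage stays below 90%
```

Production stacks (Prometheus, Nagios, etc.) generalize this loop: collect a metric, compare against a rule, fire an alert.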

Job posted by
Anurag Mahanta

Senior Artificial Intelligence/Machine Learning Engineer

Founded 2017
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 10 years
Salary: Best in industry, 6 - 12 lacs/annum

Responsibilities:
- Define the short-term tactics and long-term technology strategy.
- Communicate that technical vision to technical and non-technical partners, customers, and investors.
- Lead the development of AI/ML-related products as the team matures into lean, high-performing agile teams.
- Scale the AI/ML teams by finding and hiring the right mix of on-shore and off-shore resources.
- Work collaboratively with the business, partners, and customers to consistently deliver business value.
- Own the vision and execution of developing and integrating AI and machine learning into all aspects of the platform.
- Drive innovation through the use of technology and unique ways of applying it to business problems.

Experience and Qualifications:
- Masters or Ph.D. in AI, computer science, ML, electrical engineering, or related fields (statistics, applied math, computational neuroscience)
- Relevant experience leading and building teams and establishing technical direction
- A well-developed portfolio of past software development, composed of some mixture of professional work, open source contributions, and personal projects
- Experience leading and developing remote and distributed teams
- The ability to think strategically and apply that through to innovative solutions
- Experience with cloud infrastructure
- Experience working with machine learning, artificial intelligence, and large datasets to drive insights and business value
- Experience in agent architectures, deep learning, neural networks, computer vision, and NLP
- Experience with distributed computational frameworks (YARN, Spark, Hadoop)
- Proficiency in Python and C++; familiarity with DL frameworks (e.g. neon, TensorFlow, Caffe, etc.)

Personal Attributes:
- Excellent communication skills
- Strong fit with the culture
- Hands-on approach, self-motivated with a strong work ethic
- Ability to learn quickly (technology, business models, target industries)
- Creative and inspired

Superpowers we love:
- Entrepreneurial spirit and a vibrant personality
- Experience with the lean-startup build-measure-learn cycle
- A vision for AI
- An extensive understanding of why things are done the way they are done in agile development
- A passion for adding business value

Note: the selected candidate will be offered ESOPs too.
Employment Type: Full Time
Salary: 8-10 Lacs + ESOP
Function: Systems/Product Software
Experience: 3 - 10 Years

Job posted by
Layak Singh

Full-Stack Developer

Founded 2014
Products and services
Location: Bengaluru (Bangalore)
Experience: 1 - 4 years
Salary: Best in industry, 3 - 8 lacs/annum

About us: Intugine Technologies is a Bangalore-based startup solving the problem of tracking market-based vehicles.

Role: This Full-Stack Developer position is responsible for independently analyzing, designing, and developing dashboard solutions to meet and exceed client reporting needs using Agile development methodology.

Job Responsibilities:
1. Design, analyze, and develop innovative visualizations to meet client reporting needs.
2. Create dashboards and advanced reports using effective data visualization techniques (grids, graphs, and other forms of reports) using d3 and nvd3.
3. Hands-on experience with React/AngularJS.
4. Proficiency in microservices, web services, data structures, RDBMS, MySQL, and MongoDB.

Role Requirements:
- JavaScript (AngularJS, Node.js, Ext.js)
- Responsive web design
- Backend knowledge of Java, Hadoop, REST web services, APIs, and modern databases

Job posted by
Ayush Agrawal

Data Engineer

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: Best in industry, 15 - 40 lacs/annum

Designation: Senior Data Engineer Experience: 6+ years Work Location: Bangalore About the role: At Bizongo, we believe in delivering excellence which drives business efficiency for our customers. As a Data Engineer at Bizongo, you will be working on developing the next generation of technology that will impact how businesses take care of their process and derive process efficiency. We are looking for engineers who can bring fresh ideas, function at scale and are passionate about technology. We expect our engineers to be multidimensional, display leadership and have a zeal for learning as well as experimentation as we push business efficiency through our technology. As a Data Engineer, You’ll be provided with enough freedom and opportunity to explore/analyse and build data pipeline and the data analytics from the scratch. Roles and Responsibilities Help to improve our Data Models Constantly thinking of building data services that are consumed by Backend Services Aligning database architecture with business requirements Help and review database architecture along with the Backend Development team Coming up with data warehouse platform for reporting purposes. Figuring out inconsistencies and anomalies in our current data model systems and Organising unstructured data - setting up the logging process and building analytics on the same Must-haves: Excellent in data structures, analyzing and solving problems B.Tech/B.E. in Computer Science and Engineering Write scalable, production level services Thorough with using Git Performance optimization, SQL tuning, caching techniques Understanding of how the web works in general Knowledge of creating fault­-tolerant, extensible, reusable architecture Passionate to work in a start-up It would be a plus if: You understand prevalent design patterns You have a fair knowledge of Backend and Frontend development You have fair soft skills Why work with us? Opportunity to work with "India’s leading B2B" E-commerce venture. 
The company grew its revenue by more than 12x last year to reach to a 200 Cr annual revenue run rate scale. We invite you to be part of the upcoming growth story of B2B sector through Bizongo. Opportunity to work with most dynamic individuals in Asia recognised under Forbes 30 Under 30 and industry stalwarts from across companies like Microsoft, Paypal, Gravitas, Parksons, ITC, Snapdeal, Fedex, Deloitte and HUL. Working in Bizongo translates into being a part of a dynamic start-up with some of the most enthusiastic, hardworking and intelligent people in a fast paced and electrifying environment. Bizongo has been awarded as the most Disruptive Procurement Startup of the year - 2017. This year ,Bizongo was awarded the highly prestigious & internationally acclaimed 'The DieLine Award' under the Sustainable Packaging category. Being a company that is expanding itself every day and working towards exploring newer avenues in the market, every employee grows with the company. The position provides a chance to build on existing talents, learn new skills and gain valuable experience in the field of Ecommerce. About the Company: Company Website: https://www.bizongo.in Any solution worth anything is unfailingly preceded by clear articulation of a problem worth solving. Even a modest study of Indian Packaging industry would lead someone to observe the enormous fragmentation, chaos and rampant unreliability pervading the ecosystem. When businesses are unable to cope even with these basic challenges, how can they even think of materializing an eco-friendly & resource-efficient packaging economy? These are some hardcore problems with real-world consequences which our country is hard-pressed to solve. Bizongo was conceived as an answer to these first level challenges of disorganization in the industry. We employed technology to build a business model that can streamline the packaging value-chain & has enormous potential to scale sustainably. 
Our potential to fill this vacuum was recognized early on by Accel Partners and IDG Ventures, who jointly led our Series A funding. Most recently, B Capital Group, a global tech fund led by Facebook co-founder Eduardo Saverin, invested in our technological capabilities when it jointly led our Series B funding with IFC. The International Finance Corporation (IFC), the private-sector investment arm of the World Bank, cited our positive ecosystem impact on the network of 30,000 SMEs operating in the packaging industry as one of the core reasons for its investment decision. Beyond these bastions of support, we are extremely grateful to have found validation from various authoritative institutions, including Forbes 30 Under 30 Asia. Being the only major B2B player in the country with such an unprecedented model has lent us enormous scope for experimentation in our efforts to break new ground. Dreaming and learning together, we have grown from a team of 3 at our founding in 2015 to a 250+ strong family with offices across Mumbai, Gurgaon and Bengaluru. Those who strive for opportunities to rise above their own limitations, who seek to build an ecosystem of positive change and to find remarkable solutions to challenges where none existed before, will find a welcome abode at Bizongo.

Job posted by Sucheta Jadhav

Backend Engineer - Java/Scala/Distributed System/NoSQL

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: Best in industry, 8 - 9 lacs/annum

Systems Engineer

About Intellicar Telematics Pvt Ltd:
Intellicar Telematics Private Limited is a vehicular telematics organization founded in 2015 with the vision of connecting businesses and customers to their vehicles in a meaningful way. We give vehicle owners the ability to connect to and diagnose their vehicles remotely in real time. Our team consists of individuals with in-depth knowledge and understanding of automotive engineering, driver analytics and information technology. By leveraging our expertise in the automotive domain, we have created solutions that reduce the operational and maintenance costs of large fleets and ensure safety at all times.

Solutions:
- Enterprise fleet management, GPS tracking
- Remote engine diagnostics, driver behavior & training
- Technology integration: GIS, GPS, GPRS, OBD, web, accelerometer, RFID, on-board storage

Intellicar's team of accomplished automotive engineers, hardware manufacturers, software developers and data scientists has developed solutions to track vehicles and drivers, and to ensure optimum performance, utilization and safety at all times. We cater to the needs of clients across industries such as self-drive cars, taxi cab rentals, taxi cab aggregators, logistics, driver training, bike rentals, construction, e-commerce, armored trucks, manufacturing, dealerships and more.
Desired skills as a developer:
- Education: BE/B.Tech in Computer Science or a related field
- 4+ years of experience with scalable distributed systems and building scalable multi-threaded server applications
- Strong programming skills in Java or Scala on Linux or a Unix-based OS
- Understanding of distributed systems such as Hadoop, Spark, Cassandra and Kafka
- Good understanding of HTTP, SQL and database internals
- Good understanding of the Internet and how it works
- Create new features from scratch, enhance existing features and optimize existing functionality, from conception and design through testing and deployment
- Work on projects that make our network more stable, faster and more secure
- Work with our development QA and system QA teams to come up with regression tests that cover new changes to our software

Job posted by Lata Patil

Big Data/Java Programming

Founded 2007
Location: Bengaluru (Bangalore)
Experience: 3 - 9 years
Salary: Best in industry, 3 - 9 lacs/annum

What you'll do:
- Develop analytics tools, working on big data in a distributed environment; scalability will be key
- Provide architectural and technical leadership in developing our core analytics platform
- Lead development efforts on product features in Java
- Help scale our mobile platform as we experience massive growth

What we need:
- Passion for building an analytics & personalisation platform at scale
- 3 to 9 years of software engineering experience with a product-based company in the data analytics/big data domain
- Passion for design and development from scratch
- Expert-level Java programming and experience leading the full lifecycle of application development
- Experience in analytics, Hadoop, Pig, Hive, MapReduce, ElasticSearch or MongoDB is an additional advantage
- Strong communication skills, verbal and written

Job posted by khushboo jain

PySpark Developer

via IQVIA
Founded 1969
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: Best in industry, 10 - 17 lacs/annum

1. Advanced PySpark (Python + Spark): 5-7 years of experience in Python - must
2. Distributed processing (Cloudera cluster experience, CDSW, etc.) - good to have
3. Object-oriented programming in Python - must
4. Writing unit tests in Python - must
5. Big data skills such as Hive, Hadoop and MapReduce - good to have
6. Good knowledge of Git (branching, merging, regular commits) - must
7. Software development experience - must
8. Best coding practices - must
9. Prod-ops knowledge - nice to have
10. Experience in leading teams - senior developer who should be able to lead a team in the future
11. Continuous integration and continuous delivery - good to have
12. Agile - must

Job posted by Ambili Sasidharan

Senior Software Engineer (Java/Scala/SOLR/Data/Hadoop)

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: Best in industry, 10 - 15 lacs/annum

Interested in building high-performance search systems to handle petabytes of retail data, while working in an agile, small-company environment? At CodeHall Technologies, you will have the opportunity to work with the newest technology in search and browse. We are building systems that power and personalize site search, considering the user intent for every query and providing a wholly unique, engaging search experience designed to display the most relevant results through findability.

Primary responsibilities:
- Building high-performance search systems for personalization, optimization and targeting
- Building systems with Hadoop, Solr, Cassandra, Flink, Spark and MongoDB
- Deep understanding of HTTP and REST principles
- Good diagnostic and troubleshooting skills
- Unit testing with JUnit, performance testing and tuning
- Working with rapid and innovative development methodologies: Kanban, continuous integration and daily deployments
- Highly proficient software engineering skills in Java
- Coordination with internal and external teams
- Mentoring junior engineers
- Participating in product design discussions and decisions

Minimum requirements:
- BS/MS in CS, Electrical Engineering or a foreign equivalent, plus relevant software development experience
- At least 5-8 years of software development experience
- Expert in Java, Scala or another object-oriented language
- Proficient in SQL concepts (HiveQL or Postgres a plus)
- Additional language skills for scripting and rapid application development

Desired skills and experience:
- Working with large data sets in the PBs
- Familiarity with UNIX (systems skills a plus)
- Working experience with Solr, Cassandra, MongoDB and Hadoop
- Experience working in a distributed environment and dealing with challenges around scaling and performance
- Proven ability to project and meet scheduled deadlines
- Self-driven, quick learner with attention to detail and quality

Job posted by Avneesh Jain

Python Developer - Data Analytics Startup

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: Best in industry, 4 - 10 lacs/annum

Crediwatch - Amplified Intelligence

Crediwatch is in the business of extracting valuable insights by applying artificial intelligence and deep learning, and delivering them to chief decision makers. Crediwatch believes in zero human touch and in building systems that augment human intelligence.

Role:
We are looking for a smart, dedicated engineer with strong fundamentals and coding skills for the Python engineering team at our data analytics startup. You will be responsible for the design, development and production rollout of all Python-based projects. You will drive the agile development process for the Python-based teams/projects. You will work on every level of the stack. You will partake in driving software standards and guidelines, performance analysis, benchmarking and detailed design of the system.

Desired candidate profile:
You should possess the aptitude to take up any task, investigate independently and come up with multiple solutions, highlighting the pros and cons of each. You need to have a healthy startup attitude. We are looking for candidates who have a solid understanding of concepts and best practices, and a good approach to learning and problem solving.
Experience:
At least 2 years of hands-on Python development experience, with sound knowledge of design patterns and application design.

Must have:
- Strong knowledge of Python, along with hands-on experience using Django
- Experience with NoSQL, specifically MongoDB
- Good understanding of queuing mechanisms, including Redis/RabbitMQ (or similar)
- Knowledge of Python web crawling frameworks like Scrapy and Frontera is a must
- Strong Linux skills
- Knowledge of building large-scale, multi-threaded applications is a plus

Nice to have:
- Experience designing and building RESTful web services
- Experience designing, configuring and implementing Hadoop/HDFS and HBase
- Knowledge of graph databases
- Experience with Elasticsearch
- Prior experience architecting large-scale distributed systems
- Experience with cloud deployments on AWS/Azure/DO preferred
- Experience with unit testing

Desired mindset:
You should possess the aptitude to take up any task, investigate independently and come up with multiple solutions, highlighting the pros and cons of each. You need to have a healthy startup attitude. We are looking for candidates who have a good approach to learning and problem solving.

Just to add:
We have a creative workplace and an open work culture. Creativity and out-of-the-box thinking are encouraged and nurtured. Some perks: excellent filter coffee, free lunches, PS4 and foosball breaks, mobile/broadband allowances, the freedom to sit in any corner of the office, and a stocked kitchen, topped off with a nice set of people to work with!

Job posted by Rohan Pannalkar

Senior Python Developer

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: Best in industry, 6 - 14 lacs/annum

Crediwatch offers the opportunity to work with the latest technologies and is building a completely automated platform for curating public-domain information to help build insights and drive business decisions. Crediwatch is working with a host of clients across the banking, NBFC, legal and technology sectors to present data and insights like never before. Crediwatch has received accolades in the Citibank Tech4Integrity challenge (worldwide), the Barclays Rise accelerator and Tech30 by YourStory, to name a few. We are based in the heart of Bangalore and are growing fast. For the current opening, the requirements are as follows.

Primary skills:
1. At least 3 years of hands-on development experience, with sound knowledge of design patterns and application design
2. Strong knowledge of Python, along with hands-on experience using Django
3. Experience with NoSQL, specifically MongoDB
4. Good understanding of queuing mechanisms, including Redis/RabbitMQ (or similar)
5. Knowledge of Python web crawling frameworks like Scrapy and Frontera is a must
6. Strong Linux skills
7. Knowledge of building large-scale, multi-threaded applications is a plus

Secondary skills:
1. Experience designing and building RESTful web services
2. Experience designing, configuring and implementing Hadoop/HDFS and HBase is a plus
3. Knowledge of graph databases is a bonus
4. Experience with Elasticsearch is a plus
5. Prior experience architecting large-scale distributed systems is a plus
6. Experience with cloud deployments on AWS/Azure preferred

Roles & responsibilities:
1. We are looking for a dedicated senior engineer to lead the Python-driven dev team
2. The person will be responsible for the design, development and production rollout of all Python-based projects
3. Will also serve as a mentor, guiding and enabling the team to deliver
4. Will drive the agile development process for the Python-based teams/projects
5. Work on every level of the stack
6. Drive software standards and guidelines, performance analysis, benchmarking and detailed design of the system
7. Work in a highly agile, fast-growing startup environment

Job posted by Rohan Pannalkar