
Datawarehousing Jobs in Bangalore (Bengaluru)

Explore top data warehousing job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are added by verified employees, who can be contacted directly below.

Data Modeler

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 8 - 12 years
Salary: ₹15L – ₹20L

Data Modeller with good communication skills, AWS experience and cloud experience. Banking project experience is a plus.

Basic qualifications:
- Computer science or related engineering degree and 5+ years of professional experience as a Data Engineer/Data Architect/Lead Data Engineer
- Experience with large volumes of structured and semi-structured data on the cloud, real-time and/or batch data processing and analytics
- Business Intelligence experience (e.g. Data Warehousing, dimensional design, Power BI, Business Objects, Qlik, Tableau)
- Experience with various data modelling tools
- Proficiency with the AWS ecosystem (Athena, EMR, Redshift, Kinesis, Aurora, DynamoDB, MSK, S3, RDS, Glue)
- Experience in creating and maintaining ETL/ELT pipeline architecture
- Expertise in relational and NoSQL databases, with strong PostgreSQL skills
- Experience with message brokers
- Expertise in performance tuning and optimization
- Experience in solving real-time issues with index fragmentation, query

Job posted by Harpreet Kour

Data Platform Engineer

Founded 2018
Products and services
Location: Remote, Bengaluru (Bangalore)
Experience: 5 - 7 years
Salary: ₹15L – ₹30L

At HypersoniX, our platform technology is aimed at solving regular and persistent problems in the data platform domain. We've established ourselves as a leading developer of innovative software solutions. We're looking for a highly skilled Data Platform Engineer to join our program and platform design team. Our ideal candidate will have expert knowledge of software development processes and solid experience in designing, developing, evaluating and troubleshooting data platforms and data-driven applications. If finding issues and fixing them with beautiful, meticulous code is among the talents that make you tick, we'd like to hear from you.

Objectives of this Role:
• Design and develop creative and innovative frameworks/components for data platforms, as we continue to experience dramatic growth in the usage and visibility of our products
• Work closely with data scientists and product owners to come up with better design/development approaches for the application and platform to scale and serve their needs
• Examine existing systems, identifying flaws and creating solutions to improve service uptime and time-to-resolve through monitoring and automated remediation
• Plan and execute full software development life cycles (SDLC) for each assigned project, adhering to company standards and expectations

Daily and Monthly Responsibilities:
• Design and build tools/frameworks/scripts to automate development, testing, deployment, management and monitoring of the company's 24x7 services and products
• Plan and scale distributed software and applications, applying synchronous and asynchronous design patterns; write code and deliver with urgency and quality
• Collaborate with the global team, producing project work plans and analyzing the efficiency and feasibility of project operations
• Manage large volumes of data and process them in real time or in batches as needed, while leveraging the global technology stack and making localized improvements
• Track, document and maintain software system functionality, both internally and externally, leveraging opportunities to improve engineering productivity
• Code review, Git operations and CI/CD; mentor and assign tasks to junior team members

Responsibilities:
• Writing reusable, testable and efficient code
• Design and implementation of low-latency, high-availability and performant applications
• Integration of user-facing elements developed by front-end developers with server-side logic
• Implementation of security and data protection
• Integration of data storage solutions

Skills and Qualifications:
• Bachelor's degree in software engineering or information technology
• 5-7 years' experience engineering software and networking platforms
• 5+ years' professional experience with Python, Java or Scala
• Strong experience in API development and API integration
• Proven knowledge of data migration, platform migration, CI/CD processes and orchestration workflows like Airflow, Luigi or Azkaban (a short sketch follows this listing)
• Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Hadoop and NoSQL platforms
• Prior experience in data warehouse and OLAP design and deployment
• Proven ability to document design processes, including development, tests, analytics and troubleshooting
• Experience with rapid development cycles in a web-based/multi-cloud environment
• Strong scripting and test automation abilities

Good to have Qualifications:
• Working knowledge of relational databases as well as ORM and SQL technologies
• Proficiency with multi-OS environments, Docker and Kubernetes
• Proven experience designing interactive applications and large-scale platforms
• Desire to continue to grow professional capabilities with ongoing training and educational opportunities
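The skills list above mentions orchestration workflows like Airflow, Luigi or Azkaban. As a rough illustration only, here is a minimal daily ETL DAG sketch, assuming Airflow 2.4+; the task logic and names are placeholders, not part of the posting:

```python
# Minimal daily ETL DAG sketch (assumes Airflow 2.4+; task names/logic are hypothetical).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull raw records from a source system (placeholder).
    print("extracting")


def transform():
    # Clean and reshape the extracted data (placeholder).
    print("transforming")


def load():
    # Write the transformed data into the warehouse (placeholder).
    print("loading")


with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run extract -> transform -> load in order.
    extract_task >> transform_task >> load_task
```

In a real pipeline the three callables would call out to the data platform components the role describes; the ordering and scheduling pattern is the part the listing is asking candidates to know.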

Job posted by Manu Panwar

Data Architect

Founded 2018
Products and services
via Nu-Pie
Location: Bengaluru (Bangalore)
Experience: 10 - 18 years
Salary: ₹14L – ₹25L

Coordinate and provide experience-based solutions for teams deploying business intelligence, data platform and big data solutions. Strong knowledge of data warehousing and big data / analytics platform solutions, and of how data architecture fits into larger data warehousing and database implementation projects as a major component of the effort. Ability to guide C-level executives / chief architects at major clients (Fortune 500) in data architecture decisions. Experience mapping out enterprise architecture transformations over a 3-year period, and leading those implementations.

Responsibilities:
- Develop, implement and maintain data architecture best practices and standards
- Utilizing data architecture best practices and standards, define and implement technical solutions for the movement of data throughout an organization
- Provide leadership in related technical areas of importance such as Business Intelligence, Reporting and Analytics
- Gather requirements for data architecture through business and technical interviews and round-table discussions
- Evaluate and make decisions regarding the alternative processes that can be followed for data movement throughout an organization: ETL, SOA / web services, bulk load
- Evaluate and make decisions regarding the alternative tools and platforms that can be used to perform activities around data collection, data distribution and reporting
- Show experience with the concepts of data modeling for both transaction-based systems and reporting / data warehouse type systems
- Evaluate data-related requirements around data quality and master data management, and understand and articulate how these factors apply to data architecture
- Understand the concepts of data quality, data ownership and data governance, and how they apply within a data architecture framework

Requirements:
- 15+ years of experience in IT, 10+ years of experience in data-related positions and responsibilities
- Excellent knowledge of multiple toolsets: ETL tools, reporting tools, data quality, metadata management, multiple database management systems, cloud, security, MDM tools (anything the Insights & Data service line may support in future)
- Bachelor's degree or equivalent in Computer Science, Information Systems or a related field
- Experience in architecting, designing, developing and implementing project work within highly visible data-driven applications in very large data warehousing / data repository environments with complex processing requirements
- A proven track record in system design and performance
- Demonstrated experience integrating systems in multi-user, multi-platform, multitasking operating system environments
- Working knowledge of relational databases such as Oracle, DB2, SQL Server, etc.
- Ability to advocate ideas and to objectively participate in design critique

Ideally the candidate should also have:
- Superb team-building skills, with a predisposition to building consensus and achieving goals through collaboration rather than direct line authority
- A positive, results-oriented style, evidenced by listening, motivating, delegating, influencing and monitoring the work being done
- Strong interpersonal/communication skills with the professional staff, senior-level executives and the business community at large
- Experience delivering enterprise architecture for data & analytics for Fortune 500 companies
- Ability to lead client architecture leadership (CIO, Chief Architect)
- Broad understanding of data platforms and tools (cloud platforms, infra, security, data movement, data engineering, visualization, MDM, DQ, lineage) and proven experience deploying architectures for the largest clients globally
- Strong communication and facilitation skills (ability to manage workshops with 20-30 client technical resources)
- Ability to interface with the CIO
- Train clients on cloud, Enterprise Data Platform, and the Capgemini POV for Data (Business DL, Perform AI, Factory Model, etc.)

Job posted by Jerrin Thomas

Data Engineer

Founded 2018
Products and services
Location: Remote, Mumbai, Pune, Bengaluru (Bangalore)
Experience: 4 - 10 years
Salary: Best in industry

What is Contentstack?
Contentstack combines the best Content Management System (CMS) and Digital Experience Platform (DXP) technology. It enables enterprises to manage content across all digital channels and create inimitable digital experiences. The Contentstack platform was designed from the ground up for large-scale, complex and mission-critical deployments. Recently recognized as the Gartner Peer Insights Customers' Choice for WCM, Contentstack is the preferred API-first, headless CMS for enterprises across the globe.

What are we looking for?
Contentstack is looking for a Data Engineer.

Roles and responsibilities:
Primary responsibilities include designing and scaling ETL pipelines and ensuring data sanity.
- Collaborate with multiple groups and improve operational efficiency
- Develop, construct, test and maintain architectures
- Align architecture with business requirements
- Identify ways to improve data reliability, efficiency and quality
- Optimize database systems for performance and reliability
- Implement model workflows to prepare/analyse/learn/predict and supply the outcomes through API contract(s)
- Establish programming patterns, document components and provide infrastructure for analysis and execution
- Set up practices for data reporting and continuous monitoring
- Strive for excellence, be open to new ideas and contribute to communities
- Industrialise the data science models and embed intelligence in product and business applications
- Find hidden patterns using data
- Prepare data for predictive and prescriptive modeling
- Deploy sophisticated analytics programs, machine learning and statistical methods

Mandatory skills:
- 3+ years of relevant work experience as a Data Engineer
- Working experience with HDFS, Bigtable, MR, Spark, data warehousing, ETL, etc.
- Advanced proficiency in Java, Scala, SQL, NoSQL
- Strong knowledge of Shell/Perl/R/Python/Ruby
- Proficiency in statistical procedures, experiments and machine learning techniques
- Exceptional problem-solving abilities

Job type: Full-time employment
Job location: Mumbai / Pune / Bangalore / Remote
Work schedule: Monday to Friday, 10am to 7pm
Minimum qualification: Graduate
Years of experience: 3+ years
No. of positions: 2
Travel opportunities: On a need basis within/outside India; candidate should have a valid passport

What really gets us excited about you?
- Experience working with product-based start-up companies
- Knowledge of working with SaaS products

What do we offer?
Interesting Work | We hire curious trendspotters and brave trendsetters. This is NOT your boring, routine, cushy, rest-and-vest corporate job. This is the "challenge yourself" role where you learn something new every day, never stop growing, and have fun while you're doing it.
Tribe Vibe | We are more than colleagues, we are a tribe. We have a strict "no a**hole policy" and enforce it diligently. This means we spend time together - with spontaneous office happy hours, organized outings and community volunteer opportunities. We are a diverse and distributed team, but we like to stay connected.
Bragging Rights | We are dreamers and dream makers, hustlers and honeybadgers. Our efforts pay off and we work with the most prestigious brands, from big-name retailers to airlines to professional sports teams. Your contribution will make an impact with many of the most recognizable names in almost every industry, including Chase, The Miami HEAT, Cisco, Shell, Express, Riot Games, IcelandAir, Morningstar, and many more!
A Seat at the Table | One Team One Dream is one of our values, and it shows. We don't believe in artificial hierarchies. If you're part of the tribe, you get a seat at the table. This includes unfiltered access to our C-Suite and regular updates about the business and its performance. Which, btw, is through the roof, so it's a great time to be joining...

Job posted by Rahul Jana

Sr. Developer (MicroStrategy)

Founded 2015
Products and services
Location: Remote, Bengaluru (Bangalore)
Experience: 4 - 15 years
Salary: ₹8L – ₹25L

- 4-8 years of experience in BI/DW
- 3+ years of experience with MicroStrategy schema, design and development
- Experience with MicroStrategy Cloud for Azure and connecting to Azure Synapse as a data source
- Extensive experience in developing reports, dashboards and cubes in MicroStrategy
- Advanced SQL coding skills
- Hands-on development in BI reporting and performance tuning
- Should be able to prepare unit test cases and execute unit testing

Job posted by Priyanka U

Data Engineer

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 2 - 6 years
Salary: ₹8L – ₹14L

Roles and responsibilities:
• Develop and maintain applications with PySpark
• Contribute to the overall design and architecture of the applications developed and deployed
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues
• Implement projects based on functional specifications

Must-have skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good at ETL architecture: business rules processing and data extraction from a data lake into data streams for business consumption
• Good customer communication
• Good analytical skills
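As a rough illustration of the PySpark work this listing describes (DataFrame core functions, Spark SQL and partition tuning), here is a minimal, hypothetical sketch; the input path, column names and partition count are assumptions, not part of the posting:

```python
# Minimal PySpark sketch: DataFrame transformations, Spark SQL, and repartitioning.
# Paths, column names, and partition counts below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Read raw data from a data lake location (hypothetical path/schema).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# DataFrame API: filter, derive a column, and aggregate.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Spark SQL over the same data via a temporary view.
orders.createOrReplaceTempView("orders")
top_customers = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_spend
    FROM orders
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
""")

# Partition tuning before writing: fewer, larger files for downstream consumers.
daily_revenue.repartition(8).write.mode("overwrite").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)
```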

Job posted by Sudarshini K

Senior Support Engineer - App Ops / Data Ops

Founded 2001
Products and services
Location: Bengaluru (Bangalore)
Experience: 5 - 12 years
Salary: Best in industry

Required:
- 5-10 years of experience in the Application and/or Data Operations Support domain
- Expertise in RCA (root-cause analysis) and in collaborating with development teams for CoE (correction of errors)
- Good communication and collaboration skills: liaise with product, operations and business teams to understand requirements and provide data extracts and reports on a need basis
- Experience working in an enterprise environment, with good discipline and adherence to SLAs
- Good understanding of ticketing tools used to track requests and manage their lifecycle, e.g. JIRA, ServiceNow, Rally, ChangeGear, etc.
- Orientation towards addressing the root cause of any issue, i.e. collaborate and follow up with development teams so that a permanent fix and prevention are given high priority
- Ability to create SOPs (standard operating procedures) in Confluence/Wiki so the support team has a good reference to use
- Self-starter and collaborator with the ability to independently acquire the knowledge required to succeed in the job
- Ability to mentor and lead Data Ops team members toward a high quality of customer experience and timely resolution of issues
- Adherence to a well-defined process for workflow with partner teams

Specifically for the Data Ops Engineer role, the following experience is required:
- BI, Reporting and Data Warehousing domain
- Experience in production support for data queries: monitoring, analysis and triage of issues
- Experience using BI tools like MicroStrategy, Qlik, Power BI, Business Objects
- Expertise in data analysis and in writing SQL queries to provide insights into production data
- Experience with relational database (RDBMS) and data-mart technologies like DB2, Redshift, SQL Server, MySQL, Netezza, etc.
- Ability to monitor ETL jobs in an AWS stack with tools like Tidal, Autosys, etc.
- Experience with big data platforms like Amazon Redshift

Responsibilities:
- Production support (Level 2):
  - Job failure resolution: re-runs based on SOPs
  - Report failure root-cause analysis and resolution
  - Address queries for existing reports and APIs
- Ad-hoc data requests for product and business stakeholders (see the sketch below): transactions per day, per entity (merchant, card type, card category); custom extracts
- Ability to track and report the health of the system
- Create a matrix for issue volume
- Coordinate and set up an escalation workflow
- Provide status reports on a regular basis for stakeholder review
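As a rough sketch of the kind of ad-hoc data request listed above (transactions per day, per entity), here is a hypothetical example using pandas; the connection string, table and column names are assumptions, not part of the posting:

```python
# Hypothetical ad-hoc extract: transactions per day, per merchant and card type.
# Connection string, table, and column names are assumptions.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@reporting-db/warehouse")

query = """
    SELECT CAST(txn_time AS DATE) AS txn_date,
           merchant_id,
           card_type,
           COUNT(*) AS txn_count
    FROM transactions
    GROUP BY CAST(txn_time AS DATE), merchant_id, card_type
    ORDER BY txn_date, merchant_id
"""

# Pull the result into a DataFrame and hand it off as a CSV extract.
report = pd.read_sql(query, engine)
report.to_csv("transactions_per_day_by_merchant.csv", index=False)
```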

Job posted by Srinivas Avanthkar

Database Engineer

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: ₹30L – ₹45L

Our product is centered around lots of data being processed, ingested and read efficiently. The underlying systems need to provide the capability to update and ingest data on the order of billions of records on a daily basis. Complex analytics queries need to run on tens of billions of rows, where a single query that can potentially touch 100+ million rows needs to finish within interactive SLAs. All of this processing happens on data with several hundred dimensions and tens of thousands of metrics. This leads to a very interesting and challenging use case in the emerging field of large-scale distributed HTAP, which is still not mature enough to provide an out-of-the-box solution that works for our scale and SLAs. So we are building a solution that can handle the complexity of our use case and scale to several trillions of rows. As a Database Engineer, you will evolve, architect, build and scale the core data warehouse that sits at the heart of Clarisights, enabling large-scale distributed, interactive analytics on near-realtime data.

What you'll do:
- Understand and gain expertise in the existing data warehouse
- Use that knowledge to identify gaps in the current system and formulate strategies for filling them
- Make KPIs around the data warehouse available
- Find solutions to evolve and scale the data warehouse; this will involve a lot of technical research, benchmarking and testing of existing and candidate replacement systems
- Build from scratch all or parts of the data warehouse to improve the KPIs
- Ensure the SLAs and SLOs of the data warehouse, which will require assuming ownership and being on call for it
- Gain a deep understanding of Linux and of the concepts that drive performance characteristics, such as IO scheduling, paging, process scheduling and CPU instruction pipelining
- Adopt or build tooling, and tune the systems to extract maximum performance out of the underlying hardware
- Build wrappers/microservices to improve visibility, control, adoption and ease of use of the data warehouse
- Build tooling and automation for monitoring, debugging and deployment of the warehouse
- Contribute to open-source database technologies that are in use or are potential candidates for use

What you bring:
We are looking for engineers with a strong passion for solving challenging engineering problems and a burning desire to learn and grow in a fast-growing startup. This is not an easy gig: it will require strong technical chops and an insatiable curiosity to make things better. We need passionate and mature engineers who can do wonders with some mentoring and don't need to be managed.
- Distributed systems: You have a good understanding of general patterns of scaling and fault tolerance in large-scale distributed systems.
- Databases: You have a good understanding of database concepts like query optimization, indexing, transactions, sharding and replication.
- Data pipelines: You have working knowledge of distributed data processing systems.
- Engineer at heart: You thrive on writing great code and have a strong appreciation for modular, testable and maintainable code, and you make sure to document it. You have the ability to take new initiatives and question the status quo.
- Passion and drive to learn and excel: You believe in our vision. You drive the product for the better, always looking to improve things, and soon become the go-to person on whatever you mastered along the way. You love dabbling in your own side projects and learning new skills that are not necessarily part of your normal day job.
- Inquisitiveness: You are curious to know how different modules on our platform work. You are not afraid to venture into unknown territories of code. You ask questions.
- Ownership: You are your own manager. You have the ability to implement engineering tasks on your own without the need for micromanagement, and you take responsibility for any task assigned to you.
- Teamwork: You should be helpful and work well with teams. You're probably someone who enjoys sharing knowledge with teammates and asking for help when you need it.
- Open-source contribution: a bonus.

Job posted by Anupran Trivedi

Data Engineer

Founded 2017
Products and services
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: ₹15L – ₹28L

Job description:
We are looking for a Data Engineer who will be responsible for collecting, storing, processing and analyzing huge sets of data coming from different sources.

Responsibilities:
- Work with Big Data tools and frameworks to provide requested capabilities
- Identify development needs in order to improve and streamline operations
- Develop and manage BI solutions
- Implement ETL processes and data warehousing
- Monitor performance and manage infrastructure

Skills:
- Proficient understanding of distributed computing principles
- Proficiency with Hadoop and Spark
- Experience building stream-processing systems using solutions such as Kafka and Spark Streaming (see the sketch below)
- Good knowledge of data querying tools: SQL and Hive
- Knowledge of various ETL techniques and frameworks
- Experience with Python/Java/Scala (at least one)
- Experience with cloud services such as AWS or GCP
- Experience with NoSQL databases such as DynamoDB or MongoDB is an advantage
- Excellent written and verbal communication skills
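As a rough, hypothetical sketch of the Kafka and Spark Streaming skill mentioned above, here is a minimal Spark Structured Streaming job that reads from a Kafka topic; the broker address, topic name and windowing choices are assumptions, not part of the posting:

```python
# Minimal Spark Structured Streaming sketch reading from Kafka.
# Broker address, topic name, and windowing choices are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_stream_sketch").getOrCreate()

# Subscribe to a Kafka topic (requires the spark-sql-kafka package on the classpath).
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; decode the value and count events per minute.
counts = (
    events
    .select(F.col("value").cast("string").alias("payload"),
            F.col("timestamp"))
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Stream the aggregates out; "complete" mode re-emits the full aggregation table.
query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .trigger(processingTime="30 seconds")
    .start()
)

query.awaitTermination()
```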

Job posted by Keerthana K

Data Analyst

Founded 2017
Products and services
Location: Bengaluru (Bangalore)
Experience: 1 - 2 years
Salary: ₹7L – ₹12L

Skill set: SQL, Python, NumPy, Pandas; knowledge of Hive and data warehousing concepts will be a plus.

JD:
- Strong analytical skills with the ability to collect, organise, analyse and interpret trends or patterns in complex data sets and provide reports and visualisations
- Work with management to prioritise business KPIs and information needs; locate and define new process improvement opportunities
- Technical expertise with data models, database design and development, data mining and segmentation techniques
- Proven success in a collaborative, team-oriented environment
- Working experience with geospatial data will be a plus
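As a rough illustration of the pandas analysis work this listing mentions, here is a minimal, hypothetical sketch that loads a data set and reports a simple trend; the file and column names are assumptions, not part of the posting:

```python
# Minimal pandas sketch: load a data set and summarise a weekly trend.
# The file name and column names are hypothetical.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Weekly revenue trend per city.
weekly = (
    orders
    .set_index("order_date")
    .groupby("city")
    .resample("W")["revenue"]
    .sum()
    .reset_index()
)

# Week-over-week percentage change, a simple trend/pattern signal for reporting.
weekly["wow_change_pct"] = weekly.groupby("city")["revenue"].pct_change() * 100

print(weekly.tail(10))
```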

Job posted by Keerthana K

Data Engineer - Google Cloud Platform

Founded 2007
Products and services
Location: Bengaluru (Bangalore)
Experience: 2 - 7 years
Salary: ₹7L – ₹20L

DESCRIPTION:
We're looking for an experienced Data Engineer with strong cloud technology experience to join our team and help our big data team take our products to the next level. This is a hands-on role: you will be required to code and develop the product in addition to your leadership role. You need a strong software development background and a love of working with cutting-edge big data platforms. You are expected to bring extensive hands-on experience with Amazon Web Services (Kinesis streams, EMR, Redshift), Spark and other big data processing frameworks and technologies, as well as advanced knowledge of RDBMS and data warehousing solutions.

REQUIREMENTS:
- Strong background working on large-scale data warehousing and data processing solutions
- Strong Python and Spark programming experience
- Strong experience in building big data pipelines
- Very strong SQL skills are an absolute must
- Good knowledge of OO, functional and procedural programming paradigms
- Strong understanding of various design patterns
- Strong understanding of data structures and algorithms
- Strong experience with Linux operating systems
- At least 2+ years of experience working as a software developer or in a data-driven environment
- Experience working in an agile environment
- Lots of passion, motivation and drive to succeed!

Highly desirable:
- Understanding of agile principles, specifically Scrum
- Exposure to Google Cloud Platform services such as BigQuery, Compute Engine, etc. (a short sketch follows this listing)
- Docker, Puppet, Ansible, etc.
- Understanding of the digital marketing and digital advertising space would be advantageous

BENEFITS:
Datalicious is a global data technology company that helps marketers improve customer journeys through the implementation of smart data-driven marketing strategies. Our team of marketing data specialists offers a wide range of skills suitable for any challenge and covers everything from web analytics to data engineering, data science and software development.
- Experience: Join us at any level and we promise you'll feel up-levelled in no time, thanks to the fast-paced, transparent and aggressive growth of Datalicious.
- Exposure: Work with ONLY the best clients in the Australian and SEA markets; every problem you solve directly impacts millions of real people at large scale across industries.
- Work culture: Voted one of the Top 10 Tech Companies in Australia. Never a boring day at work, and we walk the talk. The CEO organises nerf-gun bouts in the middle of a hectic day.
- Money: We'd love to have a long-term relationship, because long-term benefits are exponential. We encourage people to get technical certifications via online courses or digital schools.
So if you are looking for the chance to work for an innovative, fast-growing business that will give you exposure to a diverse range of the world's best clients, products and industry-leading technologies, then Datalicious is the company for you!
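The listing above names exposure to Google Cloud Platform services such as BigQuery as highly desirable. As a rough, hypothetical sketch only (project, dataset and column names are assumptions, and Google Cloud credentials are assumed to be configured), this is what a simple BigQuery query from Python can look like:

```python
# Minimal BigQuery sketch: run a query and iterate the results.
# Project, dataset, table, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT DATE(event_time) AS event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    GROUP BY event_date
    ORDER BY event_date DESC
    LIMIT 7
"""

# Submit the query job and print the daily event counts.
for row in client.query(query).result():
    print(row.event_date, row.events)
```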

Job posted by Ramjee Ganti
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No spam.