Tableau Engineer

at Aideo Technologies

Posted by Akshata Alekar
Mumbai, Navi Mumbai
3 - 8 yrs
₹4L - ₹22L / yr
Full time
Skills
Tableau
Natural Language Processing (NLP)
Computer Vision
Python
RESTful APIs
Microservices
Flask
SQL

We are establishing infrastructure for internal and external reporting using Tableau and are looking for someone with experience building visualizations and dashboards in Tableau and using Tableau Server to deliver them to internal and external users. 

 

Required Experience 

  • Implementing interactive visualizations in Tableau Desktop
  • Integrating with Tableau Server and supporting production dashboards and embedded reports
  • Writing and optimizing SQL queries
  • Proficiency in Python, including the Pandas and NumPy libraries, for data exploration and analysis
  • 3 years of experience working as a Software Engineer / Senior Software Engineer
  • Bachelor's in Engineering (Electronics and Communication, Computer Science, or IT)
  • Well versed in basic data structures, algorithms, and system design
  • Works well in a team and has very good communication skills
  • Self-motivated, organized, and fun to work with
  • Productive and efficient when working remotely
  • Test-driven mindset with a knack for finding issues and problems early in development
  • Interest in learning and picking up a wide range of cutting-edge technologies
  • Curiosity about data science concepts and domain knowledge
  • Works alongside other engineers on the team to elevate technology and consistently apply best practices
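As a minimal sketch of the Pandas/NumPy exploration work the bullets above describe (the dataset and column names here are invented for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical report data of the sort a Tableau dashboard might consume
df = pd.DataFrame({
    "region": ["North", "South", "North", "West"],
    "revenue": [120.0, 95.5, np.nan, 210.25],
})

# Basic exploration: fill the gap with the column mean, then summarize per region
df["revenue"] = df["revenue"].fillna(df["revenue"].mean())
summary = df.groupby("region")["revenue"].agg(["count", "mean"])
print(summary)
```

The same cleaned frame could then be exported as a Tableau data source or extract.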

 

Highly Desirable 

  • Data analytics
  • Experience with AWS or any other cloud technology
  • Experience with Big Data and streaming technologies such as PySpark and Kafka is a big plus
  • Shell scripting
  • Preferred tech stack: Python, REST APIs, microservices, Flask/FastAPI, Pandas, NumPy, Linux, shell scripting, Airflow, PySpark
  • Strong backend experience with microservices and REST APIs (Flask, FastAPI) and with relational and non-relational databases

About Aideo Technologies

Our proprietary NLP engine uses artificial intelligence and machine learning to interpret unstructured medical notes and assign the correct CPT and ICD-10 codes.
Founded
2009
Type
Product
Size
100-500 employees
Stage
Bootstrapped

Similar jobs

Senior Backend Engineer

at a secure data and intelligence sharing platform for Enterprises. We believe data security and privacy are paramount for AI and Machine Learning to truly evolve and embed into the world

Agency job
via HyrHub
Python
Data Structures
RESTful APIs
Design patterns
Django
Apache Kafka
pandas
TensorFlow
RabbitMQ
Amazon Web Services (AWS)
Machine Learning (ML)
DevOps
airflow
Bengaluru (Bangalore)
2 - 4 yrs
₹13L - ₹25L / yr
As part of an early stage startup:
Expectations
  • Good experience writing quality, mature Python code. Familiar with Python design patterns, OOP, refactoring patterns, and writing async tasks and heavy background tasks.
  • Understands authentication and authorization; has ideally worked on auth mechanisms in Python. Familiarity with Auth0 is preferred.
  • Understands how to secure API endpoints.
  • Familiar with AWS concepts: EC2, VPC, RDS, and IAM (or any cloud equivalent).
Backend Engineer @Eder Labs
  • Basic DevOps experience engineering and supporting services in a modern containerized cloud stack.
  • Experience and understanding of Docker and docker-compose.
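The "async tasks and heavy background tasks" expectation can be sketched with the standard library alone; the `heavy_task` work here is a stand-in for real I/O or computation:

```python
import asyncio

# Stand-in for a heavy task (real code would do I/O or CPU-bound work)
async def heavy_task(n: int) -> int:
    await asyncio.sleep(0.01)
    return n * n

async def main() -> list:
    # Schedule several background tasks concurrently and gather their results
    tasks = [asyncio.create_task(heavy_task(i)) for i in range(5)]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(results)  # [0, 1, 4, 9, 16]
```

In a real service these tasks would typically run under an async framework's task runner or a dedicated worker, not a bare `asyncio.run`.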
Responsibilities
  • Own backend design, architecture, implementation, and delivery of features and modules.
  • Take ownership of the databases (Postgres, MongoDB): write migrations, maintain them, and manage them.
  • Collaborate with a generalist team to develop, test, and launch new features. Be a generalist and find ways to bring up your team, the product, and eventually the business.
  • Refactor when needed, and keep hunting for new tools that can help us as a business (not just the engineering team).
  • Develop data pipelines, from data sourcing and wrangling (cleaning) through transformations to eventual use.
  • Develop MLOps systems to take in data, analyze it, pass it through models, and process results: DevOps for machine learning.
  • Follow modern git-oriented dev workflows, versioning, CI/CD automation, and testing.
Ideal candidate will have:
  • 2 years of full-time experience working as a data infrastructure / core backend engineer in a team environment.
  • Understanding of machine learning technologies, frameworks, and the paradigms involved.
  • Experience with the following tools:
    • FastAPI / Django
    • Airflow
    • Kafka / RabbitMQ
    • TensorFlow / Pandas / Jupyter Notebook
    • pytest / asyncio
  • Experience setting up and managing the ELK stack.
  • In-depth understanding of database systems, in terms of scaling compute efficiently.
  • Good understanding of data streaming services and the networking involved.
Job posted by
Shwetha Naik

Data Analyst

at Impact Guru

Founded 2014  •  Products & Services  •  100-1000 employees  •  Raised funding
Data Analysis
Business Analysis
Business Intelligence (BI)
Tableau
SQL
Google Analytics
Google Tag Manager (GTM)
MS-Excel
Data Analytics
PowerBI
Reporting
Mumbai
2 - 6 yrs
₹3L - ₹9L / yr
Experience: 2 to 5 years
Location: Andheri (Mumbai)
 
Job Responsibilities:
 
  • Excellent problem solving and analytical skills: the ability to develop hypotheses, understand and interpret data within the context of the product/business, solve problems, and distill data into actionable recommendations.
  • Strong communication skills, with the ability to confidently work with cross-functional teams across the globe and present information to all levels of the organization.
  • Intellectual and analytical curiosity: the initiative to dig into the why, what, and how.
  • Strong number crunching and quantitative skills.
  • Advanced knowledge of MS Excel and PowerPoint.
  • Good hands-on SQL skills.
  • Experience with Google Analytics, Optimize, Tag Manager, and other Google Suite tools.
  • Understanding of business analytics tools and statistical programming languages (R, SAS, SPSS, Tableau) is a plus.
  • Inherent interest in e-commerce and marketplace technology platforms, and broadly in the consumer Internet and mobile space.
  • 1+ years of previous experience working in a product analytics role at a product company.
  • Strong understanding of building and interpreting product funnels.
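Product-funnel analysis of the kind the last bullet describes can be sketched in a few lines of Pandas; the step names and counts below are invented:

```python
import pandas as pd

# Hypothetical funnel: users reaching each step of a flow
funnel = pd.DataFrame({
    "step": ["visit", "signup", "add_payment", "complete"],
    "users": [10000, 2500, 800, 600],
})

# Conversion relative to the previous step, and relative to the top of the funnel
funnel["step_conversion"] = funnel["users"] / funnel["users"].shift(1)
funnel["overall_conversion"] = funnel["users"] / funnel["users"].iloc[0]
print(funnel)
```

The step with the lowest `step_conversion` is the usual candidate for a deeper look (here, signup at 25%).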
Job posted by
Fahad Kazi

Senior Data Engineer

at Curl Analytics

Agency job
via wrackle
ETL
Big Data
Data engineering
Apache Kafka
PySpark
Python
Pipeline management
Spark
Apache Hive
Docker
Kubernetes
MongoDB
SQL server
Oracle
Machine Learning (ML)
BigQuery
Bengaluru (Bangalore)
5 - 10 yrs
₹15L - ₹30L / yr
What you will do
  • Bring in industry best practices around creating and maintaining robust data pipelines for complex data projects, with or without an AI component:
    • programmatically ingesting data from several static and real-time sources (incl. web scraping)
    • rendering results through dynamic interfaces, incl. web / mobile / dashboard, with the ability to log usage and granular user feedback
    • performance tuning and optimal implementation of complex Python scripts (using Spark), SQL (using stored procedures, Hive), and NoSQL queries in a production environment
  • Industrialize ML/DL solutions, deploy and manage production services, and proactively handle data issues arising on live apps
  • Perform ETL on large and complex datasets for AI applications; work closely with data scientists on performance optimization of large-scale ML/DL model training
  • Build data tools to facilitate fast data cleaning and statistical analysis
  • Ensure the data architecture is secure and compliant
  • Resolve issues escalated from business and functional areas on data quality, accuracy, and availability
  • Work closely with the APAC CDO and coordinate with a fully decentralized team across different locations in APAC and the global HQ (Paris).

You should be

  • Expert in structured and unstructured data in traditional and Big Data environments: Oracle / SQL Server, MongoDB, Hive / Pig, BigQuery, and Spark
  • Excellent knowledge of Python programming in both traditional and distributed models (PySpark)
  • Expert in shell scripting and writing schedulers
  • Hands-on experience with cloud: deploying complex data solutions in hybrid cloud / on-premise environments, for both data extraction/storage and computation
  • Hands-on experience deploying production apps using large volumes of data with state-of-the-art technologies like Docker, Kubernetes, and Kafka
  • Strong knowledge of data security best practices
  • 5+ years of experience in a data engineering role
  • Science / Engineering graduate from a Tier-1 university in the country
  • And most importantly, a passionate coder who really cares about building apps that help people do things better, smarter, and faster, even while they sleep
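The "writing schedulers" bullet can be illustrated with the standard library's `sched` module; real pipelines would use Airflow or cron, and the job names here are invented:

```python
import sched
import time

# Minimal sketch of a delay-based job scheduler for pipeline steps
scheduler = sched.scheduler(time.monotonic, time.sleep)
ran = []

def job(name: str) -> None:
    ran.append(name)  # stand-in for a real pipeline step

# Queue two jobs; the one with the shorter delay fires first
scheduler.enter(0.02, 1, job, argument=("load",))
scheduler.enter(0.01, 1, job, argument=("extract",))
scheduler.run()  # blocks until all queued jobs have run
print(ran)  # ['extract', 'load']
```

Production schedulers add what this sketch lacks: persistence, retries, and dependency ordering between steps.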
Job posted by
Naveen Taalanki
Deep Learning
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
Python
NumPy
pandas
TensorFlow
MongoDB
recommendation algorithm
Remote only
6 - 9 yrs
₹12L - ₹15L / yr
We are hiring for the Machine Learning Lead in our Data Science Team.

  • Role: Machine Learning Lead
  • Experience: 5+ Years
  • Employee strength: 80+
  • Remuneration: Most competitive in the market


Programming Language:

  • Advanced knowledge of Python.
  • Object-oriented programming skills.

Conceptual:

  • Mathematical understanding of machine learning and deep learning algorithms.
  • Thorough grasp of statistical terminology.

Applied:

  • Libraries: TensorFlow, Keras, PyTorch, Statsmodels, Scikit-learn, SciPy, NumPy, Pandas, Matplotlib, Seaborn, Plotly
  • Algorithms: ensemble algorithms, artificial neural networks and deep learning, clustering algorithms, decision tree algorithms, dimensionality reduction algorithms, etc.
  • MySQL, MongoDB, Elasticsearch, or other NoSQL database implementations.
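Of the algorithm families listed, clustering is compact enough to sketch in plain NumPy; this is a bare-bones k-means on synthetic two-blob data, not a production implementation:

```python
import numpy as np

# Two well-separated synthetic blobs to cluster
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])

centers = points[[0, -1]].copy()  # crude init: one point from each blob
for _ in range(10):
    # Assign each point to its nearest center, then recompute the centers
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centers = np.array([points[labels == k].mean(axis=0) for k in range(2)])
print(centers.round(1))
```

Library versions (e.g. scikit-learn's `KMeans`) add k-means++ initialization and empty-cluster handling that this sketch omits.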

If interested, kindly share your CV at tanya@tigihr.com

 
 
Job posted by
Grenisha Patel

Data Engineer

at CustomerGlu

Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
Data engineering
Data Engineer
MongoDB
DynamoDB
Apache
Apache Kafka
Hadoop
pandas
NumPy
Python
Machine Learning (ML)
Big Data
API
Data Structures
AWS Lambda
Glue semantics
Bengaluru (Bangalore)
2 - 3 yrs
₹8L - ₹12L / yr

CustomerGlu is a low code interactive user engagement platform. We're backed by Techstars and top-notch VCs from the US like Better Capital and SmartStart.

As we begin building repeatability in our core product offering at CustomerGlu - building a high-quality data infrastructure/applications is emerging as a key requirement to further drive more ROI from our interactive engagement programs and to also get ideas for new campaigns.

Hence we are adding more team members to our existing data team and looking for a Data Engineer.

Responsibilities

  • Design and build a high-performing data platform that is responsible for the extraction, transformation, and loading of data.
  • Develop low-latency real-time data analytics and segmentation applications.
  • Setup infrastructure for easily building data products on top of the data platform.
  • Be responsible for logging, monitoring, and error recovery of data pipelines.
  • Build workflows for automated scheduling of data transformation processes.
  • Ability to lead a team

Requirements

  • 3+ years of experience and ability to manage a team
  • Experience working with databases like MongoDB and DynamoDB.
  • Knowledge of building batch data processing applications using Apache Spark.
  • Understanding of how backend services like HTTP APIs and Queues work.
  • Write good quality, maintainable code in one or more programming languages like Python, Scala, and Java.
  • Working knowledge of version control systems like Git.
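The "how Queues work" requirement can be sketched with the standard library's thread-safe queue; a production system would use Kafka, Kinesis, or a broker like RabbitMQ instead of in-process threads:

```python
import queue
import threading

# A worker draining a queue: the core pattern behind queue-backed services
q = queue.Queue()
processed = []

def worker() -> None:
    while True:
        item = q.get()
        if item is None:  # sentinel value tells the worker to stop
            break
        processed.append(item * 2)  # stand-in for real processing
        q.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    q.put(i)
q.put(None)  # signal shutdown
t.join()
print(processed)  # [0, 2, 4]
```

The producer and consumer are decoupled: either side can run at its own pace, which is exactly what external message queues provide across processes and machines.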

Bonus Skills

  • Experience in real-time data processing using Apache Kafka or AWS Kinesis.
  • Experience with AWS tools like Lambda and Glue.
Job posted by
Barkha Budhori

Data Engineer - Python, Apache, Spark

at Spica Systems

Founded 2019  •  Products & Services  •  20-100 employees  •  Raised funding
Python
Apache Spark
Kolkata
3 - 5 yrs
₹7L - ₹12L / yr
We are a Silicon Valley based start-up, established in 2019, recognized as experts in building products and providing R&D and software development services in a wide range of leading-edge technologies such as LTE, 5G, cloud services (public: AWS, Azure, GCP; private: OpenStack), and Kubernetes. We have a highly scalable and secure 5G Packet Core Network, orchestrated by an ML-powered Kubernetes platform, which can be deployed in various multi-cloud modes along with a test tool. Headquartered in San Jose, California, we have our R&D centre in Sector V, Salt Lake, Kolkata.
 

Requirements:

  • Overall 3 to 5 years of experience designing and implementing complex large-scale software.
  • Strong Python skills are a must.
  • Experience with Apache Spark, Scala, Java, and Delta Lake
  • Experience designing and implementing templated ETL/ELT data pipelines
  • Expert-level experience in data pipeline orchestration using Apache Airflow for large-scale production deployments
  • Experience visualizing data from various tasks in the data pipeline using Apache Zeppelin, Plotly, or any other visualization library
  • Log management and log monitoring using ELK/Grafana
  • GitHub integration
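"Templated ETL/ELT data pipelines" can be sketched framework-free: each stage is a plain function and a template wires them together, so stages can be swapped per data source. The stage bodies below are invented placeholders:

```python
from typing import Callable, Iterable, List

# The pipeline template: extract, transform, load as swappable stages
def run_pipeline(
    extract: Callable[[], Iterable[dict]],
    transform: Callable[[dict], dict],
    load: Callable[[List[dict]], None],
) -> None:
    load([transform(row) for row in extract()])

# One concrete (hypothetical) instantiation of the template
sink: List[dict] = []
run_pipeline(
    extract=lambda: [{"val": "1"}, {"val": "2"}],   # e.g. read from a source system
    transform=lambda row: {"val": int(row["val"])},  # e.g. type coercion / cleaning
    load=sink.extend,                                # e.g. write to a warehouse
)
print(sink)  # [{'val': 1}, {'val': 2}]
```

In Airflow, each stage would become a task and the template a DAG, with the orchestrator handling scheduling and retries.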

 

Technology Stack: Apache Spark, Apache Airflow, Python, AWS, EC2, S3, Kubernetes, ELK, Grafana, Apache Arrow, Java

Job posted by
Priyanka Bhattacharya

Data Science Engineer (SDE I)

at Couture.ai

Founded 2017  •  Product  •  20-100 employees  •  Profitable
Spark
Algorithms
Data Structures
Scala
Machine Learning (ML)
Big Data
Hadoop
Python
Bengaluru (Bangalore)
1 - 3 yrs
₹12L - ₹20L / yr
Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to empower real-time experiences for their combined >200 million end users.

For this role, credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with Big Data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL databases, Big Data analytics, and handling Unix and production servers.

A Tier-1 college background (BE from the IITs, BITS Pilani, top NITs, or IIITs, or an MS from Stanford, Berkeley, CMU, or UW–Madison) or an exceptionally bright work history is a must. Let us know if this interests you to explore the profile further.
Job posted by
Shobhit Agarwal

Data Analyst

at Ingrainhub

Founded 2017  •  Products & Services  •  20-100 employees  •  Bootstrapped
Python
MS-Excel
R Programming
Bengaluru (Bangalore)
3 - 7 yrs
₹3L - ₹12L / yr
Good knowledge of SQL and Microsoft Excel, plus one programming language among SAS, Python, or R.
Job posted by
Karthik Kulkarni

Etl developer

at TechChefs Software

Founded 2015  •  Services  •  100-1000 employees  •  Bootstrapped
ETL
Informatica
Python
SQL
Remote, anywhere from India
5 - 10 yrs
₹1L - ₹15L / yr

Responsibilities

  • Installing and configuring Informatica components, including high availability; managing server activations and de-activations for all environments; ensuring that all systems and procedures adhere to organizational best practices
  • Day to day administration of the Informatica Suite of services (PowerCenter, IDS, Metadata, Glossary and Analyst).
  • Informatica capacity planning and on-going monitoring (e.g. CPU, Memory, etc.) to proactively increase capacity as needed.
  • Manage backup and security of Data Integration Infrastructure.
  • Design, develop, and maintain all data warehouse, data marts, and ETL functions for the organization as a part of an infrastructure team.
  • Consult with users, management, vendors, and technicians to assess computing needs and system requirements.
  • Develop and interpret organizational goals, policies, and procedures.
  • Evaluate the organization's technology use and needs and recommend improvements, such as software upgrades.
  • Prepare and review operational reports or project progress reports.
  • Assist in the daily operations of the Architecture Team: analyzing workflow, establishing priorities, developing standards, and setting deadlines.
  • Work with vendors to manage support SLAs and influence vendor product roadmaps.
  • Provide leadership and guidance in technical meetings, define standards, and assist with / provide status updates.
  • Work with cross-functional operations teams such as systems, storage, and network to design technology stacks.

 

Preferred Qualifications

  • Minimum 6+ years of experience in an Informatica Engineer / Developer role
  • Minimum 5+ years of experience as a developer in an ETL environment
  • Minimum 5+ years of experience in SQL coding and understanding of databases
  • Proficiency in Python
  • Proficiency in command line troubleshooting
  • Proficiency in writing code in Perl/Shell scripting languages
  • Understanding of Java and concepts of Object-oriented programming
  • Good understanding of systems, networking, and storage
  • Strong knowledge of scalability and high availability
Job posted by
Shilpa Yadav

Senior Data Scientist

at Opscruise

Founded 2018  •  Product  •  20-100 employees  •  Raised funding
Data Science
Python
Machine Learning (ML)
DA
Unsupervised learning
Supervised learning
Remote, Chennai
9 - 25 yrs
₹8L - ₹25L / yr

Responsibilities

  • Research and test novel machine learning approaches for analysing large-scale distributed computing applications.
  • Develop production-ready implementations of proposed solutions across different AI and ML models and algorithms, including testing on live customer data to improve accuracy, efficacy, and robustness
  • Work closely with other functional teams to integrate implemented systems into the SaaS platform
  • Suggest innovative and creative concepts and ideas that would improve the overall platform

Qualifications

The ideal candidate must have the following qualifications:

  • 5+ years of experience in the practical implementation and deployment of large customer-facing ML-based systems.
  • MS or M.Tech (preferred) in applied mathematics/statistics; CS or engineering disciplines are acceptable but must come with strong quantitative and applied mathematical skills
  • In-depth working familiarity, beyond coursework, with classical and current ML techniques, both supervised and unsupervised learning techniques and algorithms
  • Implementation experience and deep knowledge of classification, time series analysis, pattern recognition, reinforcement learning, deep learning, dynamic programming, and optimization
  • Experience modeling graph structures related to spatiotemporal systems
  • Programming skills in Python are a must
  • Experience in developing and deploying on cloud (AWS or Google or Azure)
  • Good verbal and written communication skills
  • Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow
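Time series analysis for unsupervised anomaly detection, one of the technique families listed above, can be sketched with a simple z-score test; the series below is synthetic with one injected spike:

```python
import numpy as np

# Synthetic metric series with one injected anomaly
rng = np.random.default_rng(1)
series = rng.normal(10.0, 0.5, 200)
series[120] = 25.0  # the spike we expect to flag

# Flag points whose z-score exceeds a threshold
z = (series - series.mean()) / series.std()
anomalies = np.flatnonzero(np.abs(z) > 4)
print(anomalies)  # [120]
```

Real deployments would use rolling or robust statistics (median/MAD) so the anomaly itself does not inflate the baseline, plus model-based methods for seasonal series.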

 

Most importantly, you should be someone who is passionate about building new and innovative products that solve tough real-world problems.

Location

Chennai, India

Job posted by
sharmila M