We are establishing infrastructure for internal and external reporting using Tableau, and are looking for someone with experience building visualizations and dashboards in Tableau and delivering them to internal and external users through Tableau Server.
Required Experience
- Implementation of interactive visualizations using Tableau Desktop
- Integration with Tableau Server, and support for production dashboards and embedded reports delivered through it
- Writing and optimization of SQL queries
- Proficiency in Python, including the Pandas and NumPy libraries, for data exploration and analysis (see the short sketch after this list)
- 3 years of experience working as a Software Engineer / Senior Software Engineer
- Bachelor's in Engineering (Electronics and Communication, Computer Science, or IT)
- Well versed in basic data structures, algorithms, and system design
- Works well in a team and has very good communication skills
- Self-motivated, organized, and fun to work with
- Productive and efficient when working remotely
- Test-driven mindset with a knack for finding issues early in development
- Interest in learning and picking up a wide range of cutting-edge technologies
- Curious and interested in learning data science concepts and domain knowledge
- Work alongside other engineers on the team to elevate technology and consistently apply best practices
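As a quick illustration of the Pandas/NumPy exploration mentioned above, here is a minimal sketch; the file name and column names (order_date, revenue) are hypothetical:

```python
import numpy as np
import pandas as pd

# Load a sample extract; the path and schema are illustrative only.
df = pd.read_csv("sales_extract.csv", parse_dates=["order_date"])

# Basic profiling: shape, dtypes, missing values, and summary statistics.
print(df.shape)
print(df.dtypes)
print(df.isna().sum())
print(df.describe(include="all"))

# Simple exploration: monthly revenue totals and outlier flagging via z-scores.
monthly = df.set_index("order_date")["revenue"].resample("M").sum()
z_scores = (monthly - monthly.mean()) / monthly.std()
print(monthly[np.abs(z_scores) > 2])
```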
Highly Desirable
- Data Analytics
- Experience with AWS or other cloud technologies
- Experience with big data and streaming technologies such as PySpark and Kafka is a big plus
- Shell scripting
- Preferred tech stack: Python, REST APIs, microservices, Flask/FastAPI, pandas, NumPy, Linux, shell scripting, Airflow, PySpark
- Strong backend experience, having worked with microservices and REST APIs (Flask, FastAPI) and with relational and non-relational databases
About Aideo Technologies
Senior Backend Engineer
at a secure data and intelligence sharing platform for enterprises. We believe data security and privacy are paramount for AI and machine learning to truly evolve and become embedded in the world.
Expectations
Good experience writing high-quality, mature Python code. Familiar with Python design patterns, OOP, refactoring patterns, and writing async tasks and heavy background tasks.
Understands authentication and authorization (authn/authz); has ideally worked on an authorization/authentication mechanism in Python. Familiarity with Auth0 is preferred.
Understands how to secure API endpoints (a minimal sketch follows this list).
Familiar with AWS concepts: EC2, VPC, RDS, and IAM (or any cloud equivalent).
Basic DevOps experience, including engineering and supporting services in a modern containerized cloud stack.
Experience with and understanding of Docker and docker-compose.
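To illustrate the expectation around securing API endpoints, here is a minimal FastAPI sketch using a bearer-token dependency; the token check is a placeholder, and in practice it would validate a JWT issued by an identity provider such as Auth0:

```python
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer_scheme = HTTPBearer()

# Placeholder check; a real service would validate a JWT issued by an
# identity provider such as Auth0 instead of comparing a fixed string.
def verify_token(
    credentials: HTTPAuthorizationCredentials = Depends(bearer_scheme),
) -> str:
    if credentials.credentials != "expected-demo-token":
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token"
        )
    return credentials.credentials

@app.get("/reports/summary")
def read_summary(token: str = Depends(verify_token)):
    # Only reachable with a valid bearer token.
    return {"status": "ok"}
```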
Responsibilities
Own backend design, architecture, implementation, and delivery of features and modules.
Take ownership of the database: write migrations, and maintain and manage the database (Postgres, MongoDB).
Collaborate with a generalist team to develop, test, and launch new features. Be a generalist and find ways to lift up your team, the product, and eventually the business.
Refactor when needed, and keep hunting for new tools that can help us as a business (not just the engineering team).
Develop data pipelines, from data sourcing and wrangling (cleaning) through transformations to eventual use.
Develop MLOps systems to take in data, analyze it, pass it through models, and process the results: DevOps for machine learning.
Follow modern Git-oriented development workflows, versioning, CI/CD automation, and testing.
Ideal candidate will have:
2 years of full-time experience working as a data infrastructure / core backend engineer in a team environment.
Understanding of machine learning technologies, frameworks, and the paradigms involved.
Experience with the following tools:
FastAPI / Django
Airflow
Kafka / RabbitMQ
TensorFlow / Pandas / Jupyter Notebook
pytest / asyncio
Experience setting up and managing the ELK stack
In-depth understanding of database systems, particularly how to scale compute efficiently.
Good understanding of data streaming services and the networking involved.
Understand and interpret data within the context of the product/business; solve problems and distill data into actionable recommendations.
Strong communication skills, with the ability to confidently work with cross-functional teams across the globe and to present information to all levels of the organization.
Intellectual and analytical curiosity - initiative to dig into the why, what & how.
Strong number crunching and quantitative skills.
Advanced knowledge of MS Excel and PowerPoint.
Good hands-on SQL skills
Experience with Google Analytics, Optimize, Tag Manager, and other Google Suite tools
Understanding of business analytics tools and statistical programming languages (R, SAS, SPSS, Tableau) is a plus
Inherent interest in e-commerce & marketplace technology platforms and broadly in the consumer Internet & mobile space.
Previous experience of 1+ years working in a product company in a product analytics role
Strong understanding of building and interpreting product funnels.
- Bring in industry best practices around creating and maintaining robust data pipelines for complex data projects, with or without an AI component
- Programmatically ingest data from several static and real-time sources (incl. web scraping)
- Render results through dynamic interfaces (web, mobile, dashboard) with the ability to log usage and granular user feedback
- Performance-tune and optimally implement complex Python scripts (using Spark), SQL (using stored procedures, Hive), and NoSQL queries in a production environment
- Industrialize ML / DL solutions and deploy and manage production services; proactively handle data issues arising on live apps
- Perform ETL on large and complex datasets for AI applications - work closely with data scientists on performance optimization of large-scale ML/DL model training
- Build data tools to facilitate fast data cleaning and statistical analysis
- Ensure data architecture is secure and compliant
- Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability
- Work closely with the APAC CDO and coordinate with a fully decentralized team across different locations in APAC and the global HQ (Paris).
You should be
- Expert in structured and unstructured data in traditional and big data environments: Oracle / SQL Server, MongoDB, Hive / Pig, BigQuery, and Spark
- Excellent at Python programming in both traditional and distributed models (PySpark); a short PySpark sketch follows this list
- Expert in shell scripting and writing schedulers
- Hands-on cloud experience deploying complex data solutions in hybrid cloud / on-premise environments, for both data extraction/storage and computation
- Hands-on experience deploying production apps using large volumes of data with state-of-the-art technologies like Docker, Kubernetes, and Kafka
- Strong knowledge of data security best practices
- 5+ years experience in a data engineering role
- Science / Engineering graduate from a Tier-1 university in the country
- And most importantly, you must be a passionate coder who really cares about building apps that can help people do things better, smarter, and faster even when they sleep
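For illustration, a minimal PySpark ETL sketch in the spirit of the pipeline work described above; the bucket paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Extract: read raw events from a data lake location (path is hypothetical).
events = spark.read.parquet("s3a://example-bucket/raw/events/")

# Transform: drop bad rows, derive a date column, and aggregate.
daily_counts = (
    events
    .filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Load: write the aggregate back, partitioned by date.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/daily_event_counts/"
)
```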
- Role: Machine Learning Lead
- Experience: 5+ Years
- Employee strength: 80+
- Remuneration: Most competitive in the market
Programming Language:
• Advanced knowledge of Python.
• Object-oriented programming skills.
Conceptual:
• Mathematical understanding of machine learning and deep learning algorithms.
• Thorough grasp of statistical terminology.
Applied:
• Libraries: TensorFlow, Keras, PyTorch, Statsmodels, scikit-learn, SciPy, NumPy, Pandas, Matplotlib, Seaborn, Plotly
• Algorithms: Ensemble Algorithms, Artificial Neural Networks and Deep Learning, Clustering Algorithms, Decision Tree Algorithms, Dimensionality Reduction Algorithms, etc. (a brief scikit-learn sketch follows this list)
• MySQL, MongoDB, Elasticsearch, or other SQL/NoSQL database implementations.
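A brief, self-contained scikit-learn sketch of the kind of ensemble modeling listed above, using a bundled toy dataset in place of real project data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A bundled toy dataset stands in for real project data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# A random forest as a representative ensemble algorithm.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```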
If interested, kindly share your CV at tanya@tigihr.com.
CustomerGlu is a low-code interactive user engagement platform. We're backed by Techstars and top-notch VCs from the US like Better Capital and SmartStart.
As we build repeatability into our core product offering at CustomerGlu, high-quality data infrastructure and applications are emerging as a key requirement, both to drive more ROI from our interactive engagement programs and to generate ideas for new campaigns.
Hence we are adding more team members to our existing data team and looking for a Data Engineer.
Responsibilities
- Design and build a high-performing data platform that is responsible for the extraction, transformation, and loading of data.
- Develop low-latency real-time data analytics and segmentation applications.
- Setup infrastructure for easily building data products on top of the data platform.
- Be responsible for logging, monitoring, and error recovery of data pipelines.
- Build workflows for automated scheduling of data transformation processes (a minimal Airflow sketch follows this list).
- Able to lead a team
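As a minimal illustration of automated scheduling of data transformation processes, here is a sketch of an Airflow DAG; the DAG id and task bodies are placeholders, and Airflow 2.4+ is assumed for the `schedule` argument:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Task bodies are placeholders for real extraction/transformation logic.
def extract():
    print("extract data from source systems")

def transform():
    print("clean and transform the extracted data")

with DAG(
    dag_id="daily_data_transformation",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ argument name
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```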
Requirements
- 3+ years of experience and ability to manage a team
- Experience working with databases like MongoDB and DynamoDB.
- Knowledge of building batch data processing applications using Apache Spark.
- Understanding of how backend services like HTTP APIs and Queues work.
- Ability to write good-quality, maintainable code in one or more programming languages such as Python, Scala, or Java.
- Working knowledge of version control systems like Git.
Bonus Skills
- Experience in real-time data processing using Apache Kafka or AWS Kinesis.
- Experience with AWS tools like Lambda and Glue.
Data Engineer - Python, Apache Spark
Requirements:
- Overall 3 to 5 years of experience designing and implementing complex, large-scale software.
- Strong Python skills are a must.
- Experience with Apache Spark, Scala, Java, and Delta Lake
- Experience designing and implementing templated ETL/ELT data pipelines
- Expert-level experience in data pipeline orchestration using Apache Airflow for large-scale production deployments
- Experience in visualizing data from various tasks in the data pipeline using Apache Zeppelin/Plotly or any other visualization library.
- Log management and log monitoring using ELK/Grafana
- GitHub integration
Technology Stack: Apache Spark, Apache Airflow, Python, AWS, EC2, S3, Kubernetes, ELK, Grafana, Apache Arrow, Java
Responsibilities
- Installing and configuring Informatica components, including high availability; managing server activations and de-activations for all environments; ensuring that all systems and procedures adhere to organizational best practices
- Day to day administration of the Informatica Suite of services (PowerCenter, IDS, Metadata, Glossary and Analyst).
- Informatica capacity planning and on-going monitoring (e.g. CPU, Memory, etc.) to proactively increase capacity as needed.
- Manage backup and security of Data Integration Infrastructure.
- Design, develop, and maintain all data warehouse, data marts, and ETL functions for the organization as a part of an infrastructure team.
- Consult with users, management, vendors, and technicians to assess computing needs and system requirements.
- Develop and interpret organizational goals, policies, and procedures.
- Evaluate the organization's technology use and needs and recommend improvements, such as software upgrades.
- Prepare and review operational reports or project progress reports.
- Assist in the daily operations of the Architecture Team, analyzing workflow, establishing priorities, developing standards, and setting deadlines.
- Work with vendors to manage support SLAs and influence vendor product roadmaps
- Provide leadership and guidance in technical meetings, define standards, and assist with/provide status updates
- Work with cross-functional operations teams such as systems, storage, and network to design technology stacks.
Preferred Qualifications
- Minimum of 6+ years' experience in an Informatica Engineer/Developer role
- Minimum of 5+ years’ experience in an ETL environment as a developer.
- Minimum of 5+ years of experience in SQL coding and understanding of databases
- Proficiency in Python
- Proficiency in command line troubleshooting
- Proficiency in writing code in Perl/Shell scripting languages
- Understanding of Java and concepts of Object-oriented programming
- Good understanding of systems, networking, and storage
- Strong knowledge of scalability and high availability
Responsibilities
- Research and test novel machine learning approaches for analysing large-scale distributed computing applications.
- Develop production-ready implementations of proposed solutions across different AI and ML models and algorithms, including testing on live customer data to improve accuracy, efficacy, and robustness
- Work closely with other functional teams to integrate implemented systems into the SaaS platform
- Suggest innovative and creative concepts and ideas that would improve the overall platform
Qualifications
The ideal candidate must have the following qualifications:
- 5+ years of experience in practical implementation and deployment of large customer-facing ML-based systems.
- MS or M.Tech (preferred) in applied mathematics/statistics; CS or Engineering disciplines are acceptable but must come with strong quantitative and applied mathematical skills
- In-depth working familiarity, beyond coursework, with classical and current ML techniques, including both supervised and unsupervised learning techniques and algorithms
- Implementation experience and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming, and Optimization
- Experience working on modeling graph structures related to spatiotemporal systems
- Programming skills in Python are a must
- Experience in developing and deploying on cloud (AWS or Google or Azure)
- Good verbal and written communication skills
- Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow
Most importantly, you should be someone who is passionate about building new and innovative products that solve tough real-world problems.
Location
Chennai, India