
Job Title: Senior Tableau Developer
Location: Gurgaon
Experience: 4–10 Years
Salary: Negotiable
Job Summary:
We need a Senior Tableau Developer with a minimum of 4 years of experience to join our BI team. The ideal candidate will be responsible for designing, developing, and deploying business intelligence solutions using Tableau.
Key Responsibilities:
· Design and develop interactive and insightful Tableau dashboards and visualizations.
· Optimize dashboards for performance and usability.
· Work with SQL and data warehouses (Snowflake) to fetch and prepare clean data sets.
· Gather and analyse business requirements and translate them into functional and technical specifications.
· Collaborate with cross-functional teams to understand business KPIs and reporting needs.
· Conduct unit testing and resolve data or performance issues.
· Apply a strong understanding of data visualization principles and best practices.
Technical Skills Required:
· Proficient in Tableau Desktop (dashboard development, storyboards)
· Strong command of SQL (joins, subqueries, CTEs, aggregation; see the sketch after this list)
· Experience with large data sets and complex queries
· Experience working with any data warehouse (Snowflake, Redshift)
· Excellent analytical and problem-solving skills.
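To make the SQL-plus-warehouse expectation concrete, here is a minimal sketch, assuming the snowflake-connector-python package (with its pandas extra); the connection details, table, and column names are hypothetical. It runs a CTE-based aggregation in Snowflake and pulls the result into a pandas DataFrame as a clean data set for a Tableau extract.

```python
import snowflake.connector  # assumes snowflake-connector-python is installed

# Hypothetical connection details; replace with your account's values.
conn = snowflake.connector.connect(
    user="REPORTING_USER",
    password="...",
    account="my_account",
    warehouse="BI_WH",
    database="ANALYTICS",
    schema="SALES",
)

# A CTE pre-aggregates daily revenue per region before joining region names,
# so Tableau receives a small, clean extract instead of raw order rows.
QUERY = """
WITH daily_revenue AS (
    SELECT region_id, order_date, SUM(amount) AS revenue
    FROM orders
    GROUP BY region_id, order_date
)
SELECT r.region_name, d.order_date, d.revenue
FROM daily_revenue d
JOIN regions r ON r.region_id = d.region_id
ORDER BY d.order_date
"""

cur = conn.cursor()
try:
    cur.execute(QUERY)
    df = cur.fetch_pandas_all()  # clean data set ready for a Tableau extract
finally:
    cur.close()
    conn.close()
```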
Mail your updated resume with current salary to:
Email: etalenthire[at]gmail[dot]com
Satish: 88O 27 49 743
Responsibilities:
1. Manage and support LAN/WAN infrastructure including routers, switches, firewalls, and load balancers.
2. Monitor network performance and ensure system availability and reliability.
3. Troubleshoot network issues and outages.
4. Implement security tools, policies, and procedures.
5. Maintain documentation of configurations, diagrams, and procedures.
Requirements:
1. Strong hands-on experience with Cisco, Juniper, or similar network equipment.
2. Familiarity with network monitoring tools (e.g., SolarWinds, Wireshark).
3. Basic understanding of firewall and VPN configurations.
4. CCNA or equivalent certification preferred.
Company Description
BeBetta is a gamified platform designed for gamers who crave excitement, engagement, and real-world rewards. By playing games and making live predictions, users earn BetCoins, which can be redeemed for tangible prizes. Our unique approach blends gaming, predictions, and rewards, driving an immersive experience that revolutionizes user engagement. We are a high-growth, data-driven, and gamified tech startup committed to innovation and impact.
The Opportunity:
BeBetta is building the future of fan engagement. To do this, we need a backend that can handle millions of concurrent users making real-time predictions during live events. This requires a shift in our technology towards systems built for massive scale and low latency.
That’s where you come in. We are looking for a Senior Backend Engineer to lead our transition to a Go-based microservices architecture. You will be the driving force behind our most critical systems—the prediction engine, the rewards ledger, the real-time data pipelines. While our roots are in Node.js, our future is in Go, and you will be instrumental in building that future.
What You'll Achieve:
- Architect our core backend in Golang: You will design and build the services that are the backbone of the BeBetta experience, ensuring they are blazingly fast and incredibly reliable.
- Solve hard concurrency problems: You'll tackle challenges unique to real-time gaming and betting, ensuring fairness and accuracy for thousands of simultaneous user actions.
- Drive technical strategy: You will own the roadmap for evolving our architecture, including the thoughtful migration of essential services from Node.js to Go.
- Elevate the engineering bar: Through mentorship, exemplary code, and architectural leadership, you will help make our entire team better.
- Ship with impact: You will see your work go live quickly, directly enhancing the experience for our growing user base.
What You'll Bring:
- A track record of building and deploying high-performance backend systems in Golang.
- Senior-level experience (4+ years) in system design, microservices, and API development.
- Pragmatic experience with Node.js and an understanding of how to manage and migrate a monolithic or service-based system.
- Deep knowledge of database principles (PostgreSQL preferred) and high-performance data access patterns (using tools like Redis).
- Expertise in modern infrastructure: Docker, Kubernetes, and a major cloud provider (GCP/AWS).
- A strong belief that testing, observability, and clean architecture are not optional.
- An innate curiosity and a passion for solving complex problems, whether they're in code or on a whiteboard.
Why You'll Love Working Here:
This isn't just another backend role. This is a chance to put your fingerprint on the foundational technology of a fast-growing company in the exciting world of sports tech and gaming. You'll have the autonomy to make big decisions and the support of a team that's all-in on the mission.
Experience: 5-8 Years
Work Mode: Remote
Job Type: Full-time
Mandatory Skills: Python, SQL, Snowflake, Airflow, ETL, Data Pipelines, Elasticsearch, and AWS.
Role Overview:
We are looking for a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes.
Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness (a sketch follows this list).
- Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines.
- Build and optimize data pipelines using a variety of technologies, including Elasticsearch, AWS S3, Snowflake, and NFS.
- Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs.
- Implement data quality checks and monitoring to ensure data integrity and identify potential issues.
- Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes.
- Stay current with industry best practices (CI/CD, DevSecFinOps, Scrum) and emerging technologies in data engineering.
- Contribute to the development and enhancement of our data warehouse architecture.
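As referenced in the first responsibility above, here is a minimal sketch of an Airflow ETL pipeline, assuming a recent Airflow 2.x; the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Hypothetical: pull raw records from a source system or S3 object.
    ...

def transform(**context):
    # Hypothetical: clean and reshape the extracted records.
    ...

def load(**context):
    # Hypothetical: write the transformed records into Snowflake.
    ...

with DAG(
    dag_id="orders_etl",             # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Extract -> transform -> load, run once per day.
    (
        PythonOperator(task_id="extract", python_callable=extract)
        >> PythonOperator(task_id="transform", python_callable=transform)
        >> PythonOperator(task_id="load", python_callable=load)
    )
```

Each callable would hold the real extract/transform/load logic; the `>>` chaining defines the daily run order.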
Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes.
- At least 3 years of experience with Snowflake data warehousing technologies.
- At least 3 years of experience creating and maintaining Airflow ETL pipelines.
- At least 3 years of professional experience with Python for data manipulation and automation.
- Working experience with Elasticsearch and its application in data pipelines (see the sketch after this list).
- Proficiency in SQL and experience with data modelling techniques.
- Strong understanding of cloud-based data storage solutions such as AWS S3.
- Experience working with NFS and other file storage systems.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
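As noted in the Elasticsearch requirement above, here is a minimal sketch of bulk-indexing transformed records inside a pipeline, assuming the official elasticsearch Python client; the cluster URL, index name, and document shape are hypothetical.

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # hypothetical cluster URL

# Hypothetical transformed records emerging from an upstream pipeline step.
records = [
    {"order_id": 1, "region": "north", "revenue": 120.0},
    {"order_id": 2, "region": "south", "revenue": 80.5},
]

# Bulk-index the records into a (hypothetical) "orders" index so they
# are searchable downstream.
actions = (
    {"_index": "orders", "_id": r["order_id"], "_source": r} for r in records
)
helpers.bulk(es, actions)
```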
Job Description: React.js
Our client is looking for talented, enthusiastic software developers who share our passion for delivering the best user experience.
You will be the right fit if you have a keen eye for detail and high standards for code quality and efficiency, creating innovative new features and maintaining existing ones.
Responsibilities:
Developing user interface components and implementing them following well-known React.js workflows (such as Flux or Redux).
Building reusable components and front-end libraries for future use.
Optimizing components for maximum performance across a vast array of web-capable devices and browsers.
Integrating the front-end and back-end aspects of the web application.
Requirements:
Experience in developing modern web applications using React and Redux.
Strong proficiency in CSS and JavaScript, including DOM manipulation and the JavaScript object model.
Good understanding of database schema design, optimization, and scalability.
Great communication skills, strong work ethic.
Ownership of the product from start to finish.
Ability to learn new technologies quickly.
Nice to have:
Experience with AWS
Expert-level understanding of the HTML DOM and the underlying event model
Prior Open source contributions
Experience building responsive designs
Hiring for SSDE/ML.
Exp: 4-8 Yrs.
Looking for immediate joiners (within 15 days).
Primary Skills (Must have)
- Azure IaaS and PaaS.
- .Net (Core and Framework)
- Web API, ASP.NET, Entity Framework
- MS SQL Server
- HTML5, CSS3, jQuery, and JSON
- Angular / React JS
- Unit Testing Framework for .NET using MS Test Manager or NUnit
- Unit Testing Framework for Angular or React JS
- Docker & Kubernetes
- BOT
- Should have the ability to implement solution based on the technical design document and discussions with Tech Leads / Architects.
- Should have the ability to identify and implement software development best practices.
- Should have the ability to collaborate with other team members to deliver the solution.
- Should have the ability to troubleshoot issues.
Interested candidates can share their resume with gangadhar.shivarudraiah at winwire dot com
Title: Data Engineer (Azure) (Location: Gurgaon/Hyderabad)
Salary: Competitive as per Industry Standard
We are expanding our Data Engineering Team and hiring passionate professionals with extensive knowledge and experience in building and managing large enterprise data and analytics platforms. We are looking for creative individuals with strong programming skills, who can understand complex business and architectural problems and develop solutions. The individual will work closely with the rest of our data engineering and data science team in implementing and managing Scalable Smart Data Lakes, Data Ingestion Platforms, Machine Learning and NLP based Analytics Platforms, Hyper-Scale Processing Clusters, Data Mining and Search Engines.
What You’ll Need:
- 3+ years of industry experience in creating and managing end-to-end Data Solutions, Optimal Data Processing Pipelines and Architecture dealing with large volume, big data sets of varied data types.
- Proficiency in Python, Linux and shell scripting.
- Strong knowledge of working with PySpark dataframes and Pandas dataframes for writing efficient pre-processing and other data manipulation tasks (see the sketch after this list).
- Strong experience in developing the infrastructure required for data ingestion and for optimal extraction, transformation, and loading of data from a wide variety of data sources using tools like Azure Data Factory and Azure Databricks (or Jupyter notebooks/Google Colab, or other similar tools).
- Working knowledge of GitHub or other version control tools.
- Experience with creating RESTful web services and API platforms.
- Work with data science and infrastructure team members to implement practical machine learning solutions and pipelines in production.
- Experience with cloud providers like Azure/AWS/GCP.
- Experience with SQL and NoSQL databases: MySQL, Azure Cosmos DB, HBase, MongoDB, Elasticsearch, etc.
- Experience with stream-processing systems such as Spark Streaming and Kafka, and working experience with event-driven architectures.
- Strong analytic skills related to working with unstructured datasets.
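As referenced in the PySpark item above, here is a minimal sketch of the kind of pre-processing task this role involves; the input path, column names, and output location are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("preprocess").getOrCreate()

# Hypothetical raw input: a folder of CSV event exports.
raw = spark.read.csv("/data/raw/events/", header=True, inferSchema=True)

clean = (
    raw.dropDuplicates(["event_id"])                     # de-duplicate on the key
       .filter(F.col("event_ts").isNotNull())            # drop rows missing a timestamp
       .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
)

# Write partitioned Parquet for downstream consumption.
clean.write.mode("overwrite").partitionBy("event_date").parquet("/data/clean/events/")
```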
Good to have (to filter or prioritize candidates)
- Experience with testing libraries such as pytest for writing unit tests for the developed code.
- Knowledge of Machine Learning algorithms and libraries would be good to have; implementation experience would be an added advantage.
- Knowledge and experience of data lakes, Docker, and Kubernetes would be good to have.
- Knowledge of Azure Functions, Elasticsearch, etc. will be good to have.
- Having experience with model versioning (MLflow) and data versioning will be beneficial.
- Having experience with microservices libraries or with Python libraries such as Flask for hosting ML services and models would be great (a sketch follows).
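As noted in the last item, here is a minimal sketch of hosting a model behind a Flask endpoint; the model object, route, and feature payload are hypothetical stand-ins.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical stand-in for a trained model loaded at startup
# (e.g., via joblib or an MLflow model registry).
class DummyModel:
    def predict(self, features):
        return [sum(features)]  # placeholder scoring logic

model = DummyModel()

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = payload["features"]           # e.g., {"features": [1.0, 2.5, 0.3]}
    prediction = model.predict(features)[0]
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```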