Keen to grow and foster a thriving community; an influencer among your friends and community; a team player.
Love interacting with and helping people.
You are a charismatic people person who can talk to anyone; you are flexible, fearless, and excited to help build something awesome and share it with the world.
You are motivated and understand the impact of a highly satisfied, excited community.
Responsibilities
You will be empowered to create, foster, and support the community, and to inspire its growing membership to share the BelonG Experience.
Turn community members into passionate evangelists; identify and engage community advocates.
Monitor social media sites and actively participate in discussions across them; spread the BelonG love by creating exciting blogs and social content.
Design and develop programs that generate value for the community.
Manage relationships with community members, mentors, investors, and corporate partners.

About us
Blitz is an instant-logistics company operating in Southeast Asia. Founded in 2021, it delivers orders using EV bikes, and it also leases those EV bikes to its drivers, generating a second revenue stream alongside delivery charges. Blitz is revolutionizing instant logistics with advanced technology-based solutions. It is a product-driven company that uses modern technologies to build products solving problems in EV-based logistics, drawing on data coming from the EV bikes through IoT and smart engines to make technology-driven decisions and create a delightful experience for consumers.
About the Role
We are seeking an experienced Data Engineer to join our dynamic team. The Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and infrastructure to support our data-driven initiatives. The ideal candidate will have a strong background in software engineering, database management, and data architecture, with a passion for building robust and efficient data systems.
What you will do
- Design, build, and maintain scalable data pipelines and infrastructure to ingest, process, and analyze large volumes of structured and unstructured data.
- Collaborate with cross-functional teams to understand data requirements and develop solutions to meet business needs.
- Optimise data processing and storage solutions for performance, reliability, and cost-effectiveness.
- Implement data quality and validation processes to ensure accuracy and consistency of data (a minimal sketch follows this list).
- Monitor and troubleshoot data pipelines to identify and resolve issues promptly.
- Stay updated on emerging technologies and best practices in data engineering and recommend innovations to enhance our data infrastructure.
- Document data pipelines, workflows, and infrastructure to facilitate knowledge sharing and ensure maintainability.
- Create data dashboards from the datasets to visualize different data requirements.
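
Where the list above mentions data quality and validation, here is a minimal Python sketch of one such check, assuming pandas; the column names (order_id, delivered_at) are illustrative, since this posting does not name a specific toolchain.

    import pandas as pd

    def validate_orders(df: pd.DataFrame) -> pd.DataFrame:
        """Basic quality checks for a batch of delivery orders (illustrative columns)."""
        # Reject duplicate primary keys outright.
        if df["order_id"].duplicated().any():
            raise ValueError("duplicate order_id values found")
        # Coerce timestamps; unparseable values become NaT.
        df["delivered_at"] = pd.to_datetime(df["delivered_at"], errors="coerce")
        # Drop incomplete rows and report how many were removed.
        clean = df.dropna(subset=["order_id", "delivered_at"])
        dropped = len(df) - len(clean)
        if dropped:
            print(f"validation dropped {dropped} incomplete rows")
        return clean
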
What we need
- Bachelor's degree or higher in Computer Science, Engineering, or a related field.
- Proven experience as a Data Engineer or similar role, with expertise in building and maintaining data pipelines and infrastructure.
- Proficiency in programming languages such as Python, Java, or Scala.
- Strong knowledge of database systems (e.g., SQL, NoSQL, BigQuery) and data warehousing concepts.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Familiarity with data processing frameworks and tools (e.g., Apache Spark, Hadoop, Kafka).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Preferred Qualifications
- Advanced degree in Computer Science, Engineering, or related field.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Knowledge of machine learning and data analytics concepts.
- Experience with DevOps practices and tools.
- Certifications in relevant technologies (e.g., AWS Certified Big Data - Specialty, Google Professional Data Engineer).
Please refer to the Company’s website - https://rideblitz.com/

Responsibilities:
You will get a chance to create products from scratch. While you will get the advantage of the scale of the organization, you are expected to come up with creative solutions to challenging problems.
On a typical day, you'd work with highly skilled engineers to solve complex problems. This is an early-stage initiative, so your ability to translate business requirements and to develop and demonstrate quick prototypes or concepts with other technology teams will be of great value.
You will learn and work with a variety of languages and platforms such as C/C++, Python, and Linux, as well as work on BLE, MEMS, biometric sensors, and the latest wireless technologies.
Requirements:
6+ years of embedded firmware development experience in C/C++
BLE/GPS/GSM/RTOS stack expertise
Hands-on experience with lab equipment (VNA/RSA/MSO, etc.).
Testing environment setup using automation scripts and networking equipment; practices for the full software development life cycle, including coding standards, code reviews, source control management, and continuous integration.
Familiar with wireless/IoT network protocols and standards.
Experience with microcontrollers, sensors, and serial communication (a minimal sketch follows these requirements).
Preferred: experience with Wear OS/Tizen.
Superior presentation and communication skills, both written and verbal.
Bachelor's/Master's degree in electrical/electronic/communications engineering, information technology, physics, or a related field from Tier 1/Tier 2 engineering colleges only (IITs/NITs/IIITs/BITS, etc.).
Result-oriented and ready to take ownership. Exhibit strong teamwork.
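
For the serial-communication requirement above, here is a minimal Python sketch that reads newline-terminated sensor frames from a microcontroller, assuming the pyserial package (3.x); the port name and baud rate are illustrative, as the posting does not specify the hardware interface.

    import serial  # pyserial

    # Port and baud rate are assumptions, not from this posting.
    with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0) as port:
        for _ in range(10):
            frame = port.readline()  # one newline-terminated sensor frame
            if frame:
                print(frame.decode("ascii", errors="replace").strip())
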
Intuitive is the fastest-growing top-tier cloud solutions and services company, supporting global enterprise customers across the Americas, Europe, and the Middle East.
Excellent communication skills
Open to working in the EST time zone (6 pm to 3 am)
Technical Skills:
· In-depth understanding of DevSecOps processes and governance
· Understanding of various branching strategies
· Hands-on experience working with various testing and scanning tools (e.g., SonarQube, Snyk, Black Duck)
· Expertise working with one or more CI/CD platforms (e.g., Azure DevOps, GitLab, GitHub Actions)
· Expertise within one CSP and experience/working knowledge of a second CSP (Azure, AWS, GCP)
· Proficient with Terraform
· Hands-on experience working with Kubernetes (a minimal sketch follows this list)
· Proficient working with Git version control
· Hands-on experience working with monitoring/observability tools (Splunk, Datadog, Dynatrace, etc.)
· Hands-on experience working with configuration management platforms (Chef, SaltStack, Ansible, etc.)
· Hands-on experience with GitOps
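
For the hands-on Kubernetes item above, here is a minimal sketch using the official Kubernetes Python client; it assumes a working kubeconfig, and the cluster details are of course not part of this posting.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (assumes kubectl access works).
    config.load_kube_config()

    v1 = client.CoreV1Api()
    # List pods across all namespaces and show where each one is scheduled.
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(f"{pod.metadata.namespace}/{pod.metadata.name} on {pod.spec.node_name}")
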
10+ years of experience in SQL Server Administration
Should be hands-on
Able to handle customer communication and expectation management
Proactively plan and implement the project/support activities
Focus on continuous improvements
Must have experience in managing a team of SQL Server DBAs
Should have good customer and client management skills
Should have good communication skills

Position description:
- Architect and own the report automation framework using GCP’s BigQuery and any scripting language (R/Python)
- Work on enhancing existing and new analyses in Tableau
- Work closely with the existing team and mentor them
- Work on ad-hoc analysis
Primary Responsibilities:
- Architect and own the report automation framework using GCP’s BigQuery and any scripting language (R/Python); a minimal sketch follows
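
As a flavor of such report automation, here is a minimal sketch using the official BigQuery Python client; the project, dataset, table, and column names are illustrative assumptions, and authentication is assumed to be configured (e.g., via gcloud).

    from google.cloud import bigquery

    bq = bigquery.Client()  # assumes application-default credentials

    # Table and column names below are illustrative, not from this posting.
    sql = """
        SELECT report_date, SUM(revenue) AS total_revenue
        FROM `my-project.analytics.daily_sales`
        GROUP BY report_date
        ORDER BY report_date DESC
        LIMIT 30
    """
    for row in bq.query(sql).result():
        print(row.report_date, row.total_revenue)
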
Reporting Team
- Reporting Designation: Data Science Analyst
- Reporting Department: Digital Analytics BI (2511)
Required Skills:
- Hands-on experience with relevant tools like SQL (expert), Excel, R/Python, and Tableau/Power BI
- Advanced ability to draw insights from data and clearly communicate them to the stakeholders and senior management as required
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Advanced working knowledge of SQL and experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal Airflow sketch follows this list).
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
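
For the workflow-management item above, here is a minimal Airflow DAG sketch with one extract task feeding one load task; it assumes Airflow 2.4+ (for the schedule parameter), and the task names and logic are illustrative.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("extracting...")  # placeholder: pull raw records from a source

    def load():
        print("loading...")  # placeholder: write records to the warehouse

    with DAG(
        dag_id="daily_etl_example",  # illustrative name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task  # extract must finish before load runs
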

- Knowledge of Linux kernel development.
- Knowledge of microcontrollers.
- Knowledge of audio processing circuits based on ADC/DAC, the Xilinx Zynq XC7Z010-2CLG400I (Zynq-7010), ARM processing, interfacing with the 24-bit dual-channel audio codec (TLV320AIC23B), Linux/U-Boot, kernel images, and POSIX environments.
- Knowledge of FPGA synthesis, simulation, and back-end flows.
- Bringing up a Linux environment on ARM.

